On Which Skills do Indian Universities Evaluate Software Engineering Students?

Hansaraj S. Wankhede, Sanil S. Gandhi, Arvind W. Kiwelekar
Department of Computer Engineering
Dr. Babasaheb Ambedkar Technological University
Lonere, Maharashtra, India
Email: {hswankhede@dbatu.ac.in, sanil.gandhi15@gmail.com, awk@dbatu.ac.in}
Abstract

Universities conduct examinations to evaluate the skills and knowledge acquired by students. This paper presents an assessment of the skill and knowledge levels evaluated during Software Engineering examinations. The question items asked during examinations are analyzed along three dimensions, namely cognitive levels, knowledge levels, and knowledge areas. The Revised Bloom's Taxonomy is used to classify question items along the dimensions of cognitive levels and knowledge levels. Question items are also classified into the knowledge areas specified in ACM/IEEE's Computer Science Curricula. The analysis presented in this paper will be useful both for software engineering educators devising corrective interventions and for employers of fresh graduates designing pre-induction training programs.

I Introduction

An assessment of the skills acquired and knowledge gained through a course on Software Engineering is equally useful to academicians and industry professionals. Academicians can use the results of the assessment to devise appropriate interventions in case the results do not conform to the set learning objectives. Employers of fresh graduates may use the results to design pre-induction training programs.

One way to perform such an assessment is to analyze the question papers used for conducting end-semester examinations, because they include the most relevant information required for such an assessment. An end-semester question paper is typically designed to test students on a diverse range of skills, such as recalling a learned topic or applying a learned method to solve a particular problem. Further, question papers include questions from all the knowledge areas that are expected to be covered in a course on Software Engineering.

In this paper we classify questions asked in examinations along three dimensions, namely cognitive levels, knowledge levels, and knowledge areas. The categories included in the Revised Bloom's Taxonomy [7] are used to classify question items along the dimensions of knowledge and cognitive levels. Question items are also classified according to the topics included under the various knowledge areas of Software Engineering defined in ACM/IEEE's Computer Science Curricula 2013 [6, 1].

II Analysis Framework

The classification framework used to analyze the question items is derived from two different sources. The main intention of the classification framework is to analyze question items from three different dimensions, as shown in Figure 1. The first two dimensions are cognitive levels and knowledge levels, as defined in the Revised Bloom's Taxonomy (RBT) [7]. Further, each question item asked in Software Engineering examinations belongs to a particular topic or course unit. Hence the topics covered under the Software Engineering knowledge area of the ACM/IEEE Computer Science Curricula 2013 are included as the third dimension. The first two dimensions cover generic learning skills that educators intend to impart to students, while the third dimension covers domain-specific skills that employers expect from a fresh graduate.

Fig. 1: Three domains in the Analysis Framework
Fig. 2: Classification Categories in the Framework

II-A Cognitive Levels

The cognitive process dimension in RBT is broadly organized into six categories, namely Remember, Understand, Apply, Analyze, Evaluate, and Create, as shown in Figure 2. The category Remember captures the activity of retrieving knowledge from long-term memory. The activities of recognizing and recalling information, objects, and events belong to the Remember category. The category Understand means to construct meaning out of the learning material presented in the form of either lectures or notes. The acts of interpreting, exemplifying, classifying, summarizing, inferring, comparing, and explaining belong to the category Understand. The third category, Apply, refers to carrying out or using a procedure in a given situation, and it includes the acts of executing and implementing. The fourth category, Analyze, refers to breaking down the learning material into its parts and relating the parts to establish an overall structure. The acts of differentiating, organizing, and attributing are considered analytic processes. The fifth category, Evaluate, means the acts of checking and making judgments based on some criteria. The last cognitive process category in RBT is Create, which means the acts of generating, planning, and producing some product. A few example questions and their mapped cognitive levels are shown in Table I.

Cognitive Level | Question Item
Remember | What is quality assurance? What are different parameters of quality?
Understand | Explain incremental process model with a suitable example.
Apply | How do you calculate Function Point (FP) and how it is used in estimation of a software project?
Analyze | Discuss and compare Test driven development (TDD) and Design driven testing (DDT).
Evaluate | Which life cycle model would you follow for developing the following project and why? (a) Library management system (b) A web application
Create | Develop a complete Use Case System for ATM machine.
TABLE I: Question mapping to Cognitive Levels in RBT

II-B Knowledge Levels

As shown in Figure 2, the knowledge dimension in RBT is classified into four categories: Factual, Conceptual, Procedural, and Meta-cognitive knowledge. Factual knowledge captures information about specific terminologies (e.g., products, processes, life cycle models) and the basic elements that students must be well versed with. The Conceptual knowledge category includes knowledge about classification categories, principles, models, and theories. Some examples of conceptual knowledge are knowledge about life cycle models and the principle of modularity. Knowledge about procedures, methods, and algorithms is included under the category of Procedural knowledge. An example of procedural knowledge is methods for Object-Oriented Analysis (e.g., CRC cards). The last category, Meta-cognitive knowledge, corresponds to knowledge about cognition itself and understanding of one's own cognitive abilities. Table II depicts example questions and the mapped knowledge level categories.

Knowledge Category | Question Item
Factual Knowledge | Explain the difference between software and hardware characteristics.
Conceptual Knowledge | Explain the waterfall life cycle model.
Procedural Knowledge | Explain how project scheduling and tracking is done for a software development project?
Meta-cognitive Knowledge | No question is mapped to this category.
TABLE II: Question mapping to Knowledge Categories in RBT

II-C Software Engineering Knowledge Area

A set of guidelines is specified in ACM/IEEE's Computer Science Curricula (CS2013) [1] to design an undergraduate program in Computer Science. In CS2013, the knowledge body of Computer Science is organized into eighteen Knowledge Areas (KA). Each KA is further subdivided into various Knowledge Units (KU). Software Engineering is one of the eighteen KAs and is subdivided into ten different KUs, as shown in Figure 2. In this paper, we have selected CS2013 as the reference knowledge body with the intention of bridging the non-uniformity in the content of the courses on Software Engineering offered by various Indian universities.

Knowledge Unit | Question Item
Software Processes (SP) | Compare waterfall model and spiral model.
Software Project Management (SPM) | Describe project scheduling and tracking with any suitable example.
Tools and Environments (TE) | Explain Software Configuration Management in detail.
Requirements Engineering (RE) | Explain different steps in requirements engineering.
Software Design (SD) | List and explain the fundamental concepts for software design.
Software Construction (SC) | Compare conventional approach and object oriented approach to software development. What are the advantages of OOAD?
Software Verification and Validation (SVV) | What is software testing? Explain the software testing strategies.
Software Evolution (SE) | Define "Program Evolution Dynamics". Discuss the Lehman laws for program evolution dynamics.
Software Reliability (SR) | What do you understand by software reliability?
Formal Methods (FM) | No question is mapped to this topic.
TABLE III: Question item mapping to Knowledge Units of the Software Engineering Knowledge Area in ACM/IEEE's Computer Science Curricula 2013

Sr. No. | University | No. of Question Papers | No. of Question Items
1 | Visvesvaraya Technological University (VTU) | 7 | 146
2 | Savitribai Phule Pune University (PU) | 6 | 174
3 | Mumbai University (MU) | 7 | 94
4 | Gujarat Technological University (GTU) | 6 | 103
5 | Anna University (AU) | 4 | 103
6 | West Bengal Technological University (WBTU) | 3 | 68
7 | Punjab Technological University (PTU) | 3 | 57
8 | Dr. Babasaheb Ambedkar Technological University (DBATU) | 3 | 49
TABLE IV: Question Paper Data Collection

Category | Action Verbs
Remember | Choose, Define, Find, How, Label, List, Match, Name, Omit, Recall, Relate, Select, Show, Spell, Tell, What, When, Where, Which, Who, Why
Understand | Classify, Compare, Contrast, Demonstrate, Explain, Extend, Illustrate, Infer, Interpret, Outline, Relate, Rephrase, Show, Summarize, Translate
Apply | Apply, Build, Choose, Construct, Develop, Experiment with, Identify, Interview, Make use of, Model, Organize, Plan, Select, Solve, Utilize
Analyze | Analyze, Assume, Categorize, Classify, Compare, Conclusion, Contrast, Discover, Dissect, Distinguish, Divide, Examine, Function, Inference, Inspect, List, Motive, Relationships, Simplify, Survey, Take part in, Test for, Theme
Evaluate | Agree, Appraise, Assess, Award, Choose, Compare, Conclude, Criteria, Criticize, Decide, Deduct, Defend, Determine, Disprove, Estimate, Evaluate, Explain, Importance, Influence, Interpret, Judge, Justify, Mark, Measure, Opinion, Perceive, Prioritize, Prove, Rate, Recommend, Rule on, Select, Support, Value
Create | Adapt, Build, Change, Choose, Combine, Compile, Compose, Construct, Create, Delete, Design, Develop, Discuss, Elaborate, Estimate, Formulate, Happen, Imagine, Improve, Invent, Make up, Maximize, Minimize, Modify, Original, Originate, Plan, Predict, Propose, Solution, Solve, Suppose, Test, Theory
TABLE V: Action Verbs [3] for Cognitive Categories in RBT

III The Analysis Method

The analysis of question papers is carried out with the intention of answering the following questions.

(i) Do SE examinations test students on all cognitive skills?
This question is significant because students of an engineering undergraduate program are expected to be evaluated on higher-order thinking skills such as Analysis and Synthesis rather than on skills such as Remember and Understand.

(ii) On which kinds of knowledge are students tested during SE examinations?
Answering this question is important because certain courses contain a specific kind of knowledge. For example, the content of a course on Data Structures and Algorithms is of the Procedural type, while the majority of the content of a course on Software Engineering is of the Conceptual type. The question items asked in an examination should reflect this tacit assumption.

(iii) Do SE examinations give sufficient coverage to all the knowledge units?
This question is significant because it verifies whether an examination sufficiently covers all the knowledge units that are expected to be covered or whether it is skewed towards particular topics.

III-A Activities

A majority of Indian universities adopt the standardized test as the assessment method to test the knowledge and skills of students enrolled for a particular course at an affiliated college. In a standardized test, examinations are administered by the university: students across the various affiliated colleges answer the same question paper delivered at the same time. The survey presented in this paper considers only those institutes which adopt the standardized test as an assessment method. Autonomous colleges conducting examinations specific to students enrolled in one particular institute are excluded from the survey. This section describes the main activities conducted during the survey.

III-A1 Collection of Question Papers

Question items are the basic unit of analysis for the survey presented in this paper. Question items are collected from end-semester examinations conducted by various universities. Most universities offer a course on Software Engineering during the third year of an undergraduate programme in either Computer Engineering or Information Technology. A few universities from all four regions of India are selected for the analysis. Easy accessibility of question papers in the public domain is the main criterion for selecting the universities. Most of the question papers included in the survey are downloaded from the official websites of the universities. Some question papers are also downloaded from websites [4, 2] hosting study material for engineering students. Question papers from examinations held during the last five years are used for the analysis. Table IV shows the number of question papers selected and the total number of question items from each university.

III-A2 Preparation of Question Bank

A question bank in the form of a spreadsheet is prepared by picking up question items from the selected question papers. For each question item, the text of the question, the name of the university, the examination year, and the assigned categories (i.e., knowledge level, cognitive level, and knowledge area) are stored. The question bank includes about eight hundred questions drawn from about forty question papers. Duplicated question items are stored only once, because the same questions may be repeated in multiple examinations.
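The exact spreadsheet layout is not given in the paper; the following Java sketch only illustrates what one row of such a question bank might look like, with hypothetical field names.

```java
// Hypothetical sketch of one row of the question bank spreadsheet;
// the actual column names used by the authors are not specified.
public record QuestionItem(
        String text,            // full text of the question item
        String university,      // e.g., "MU" for Mumbai University
        int examYear,           // year the examination was held
        String cognitiveLevel,  // one of the six RBT cognitive categories
        String knowledgeLevel,  // Factual, Conceptual, Procedural, or Meta-cognitive
        String knowledgeUnit    // CS2013 knowledge unit, e.g., "SP", "SPM", "SD"
) {}
```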

III-A3 Assignment of Categories

Each question item is classified along three different dimensions, i.e., cognitive level, knowledge type, and knowledge unit. To assign a cognitive category, the category-wise list of action verbs prepared by Azusa Pacific University, California [3], shown in Table V, is used. Table I shows the assignment of cognitive categories to a few question items. The knowledge category is assigned by interpreting the noun phrases in the question item. The guidelines specified in [7] are used to classify the question items into the various knowledge categories; some of the guidelines used during interpretation are described in Section II-B. The knowledge unit is assigned to a question item by interpreting the content of the question. About eight hundred question items are analyzed, and categories are assigned from the three different perspectives. A tool has been implemented in Java to assign cognitive level categories. Initially, the cognitive level categorization was performed manually by the first author, who has taught a course on Software Engineering. Question items were then also categorized through the tool and verified by the second and third authors.
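The Java tool itself is not described in detail; the sketch below is our minimal illustration (not the authors' implementation) of how an action-verb lookup against Table V can suggest cognitive categories. The verb table is abbreviated to a few verbs per category, matching is naive substring search, and, as discussed in Section VI, a question matching verbs from several categories still needs human judgment.

```java
import java.util.*;

// Minimal sketch of action-verb-based cognitive-level classification,
// using a small subset of the verbs from Table V.
public class CognitiveClassifier {

    // Abbreviated, hypothetical verb table; the full list is in Table V.
    private static final Map<String, List<String>> VERBS = new LinkedHashMap<>();
    static {
        VERBS.put("Remember",   List.of("define", "recall", "what"));
        VERBS.put("Understand", List.of("explain", "summarize", "compare"));
        VERBS.put("Apply",      List.of("apply", "solve", "make use of"));
        VERBS.put("Analyze",    List.of("analyze", "distinguish", "examine"));
        VERBS.put("Evaluate",   List.of("justify", "judge", "recommend"));
        VERBS.put("Create",     List.of("design", "develop", "formulate"));
    }

    // Returns every category whose verb list matches the question text;
    // a human judge is still needed when more than one category matches.
    public static List<String> classify(String question) {
        String q = question.toLowerCase();
        List<String> matches = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : VERBS.entrySet()) {
            for (String verb : e.getValue()) {
                if (q.contains(verb)) {   // naive; a real tool would match whole words
                    matches.add(e.getKey());
                    break;
                }
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        // Example from Table I: the expected category is Create.
        System.out.println(classify("Develop a complete Use Case System for ATM machine."));
    }
}
```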

PaperID Remember Understand Apply Analyze Evaluate Create
MU2014S 6.25 43.75 18.75 0.00 25.00 6.25
MU2013S 0.00 61.54 7.69 0.00 15.38 15.38
MU2012S 0.00 72.73 9.09 0.00 0.00 18.18
MU2014W 26.67 53.33 13.33 6.67 0.00 0.00
MU2013W 0.00 38.46 23.08 23.08 15.38 0.00
MU2012W 7.69 76.92 7.69 0.00 0.00 7.69
MU2011W 0.00 69.23 15.38 7.69 0.00 7.69
PU2014W 20.00 66.67 6.67 3.33 3.33 0.00
PU2013W 13.33 76.67 3.33 3.33 3.33 0.00
PU2012W 10.00 60.00 10.00 10.00 3.33 6.67
PU2014S 28.57 60.71 3.57 3.57 3.57 0.00
PU2013S 3.57 53.57 14.29 10.71 3.57 14.29
PU2011S 3.57 64.29 21.43 3.57 7.14 0.00
VU2014W 0.00 76.19 23.81 0.00 0.00 0.00
VU2013W 29.41 58.82 5.88 0.00 5.88 0.00
VU2012W 0.00 85.71 0.00 14.29 0.00 0.00
VU2011W 8.00 80.00 0.00 0.00 4.00 8.00
VU2012M 13.04 78.26 0.00 8.70 0.00 0.00
VU2013M 13.64 77.27 4.55 0.00 4.55 0.00
VU2014M 20.83 58.33 0.00 0.00 4.17 16.67
GU2014W 0.00 70.59 11.76 5.88 0.00 11.76
GU2013W 5.56 61.11 11.11 11.11 0.00 11.11
GU2014S 17.65 58.82 0.00 17.65 0.00 5.88
GU2013S 17.65 52.94 17.65 5.88 5.88 0.00
GU2012S 0.00 41.18 11.76 11.76 0.00 35.29
GU2011S 11.76 70.59 5.88 0.00 0.00 11.76
AU2014S 36.00 52.00 4.00 0.00 0.00 8.00
AU2013S 31.03 62.07 6.90 0.00 0.00 0.00
AU2013W 41.38 27.59 0.00 10.34 3.45 17.24
AU2012W 15.00 55.00 15.00 0.00 10.00 5.00
WBTU2013 33.33 33.33 11.11 5.56 11.11 5.56
WBTU2012 19.35 51.61 9.68 3.23 6.45 9.68
WBTU2011 36.84 36.84 5.26 10.53 0.00 10.53
PTU2010S 38.89 44.44 0.00 5.56 5.56 5.56
PTU2009W 76.19 23.81 0.00 0.00 0.00 0.00
PTU2009S 38.89 61.11 0.00 0.00 0.00 0.00
DBATU2015S 25.00 50.00 8.33 0.00 8.33 8.33
DBATU2014W 22.22 55.56 0.00 16.67 5.56 0.00
DBATU2014S 31.58 57.89 0.00 5.26 5.26 0.00
TABLE VI: Paper-wise Analysis of Cognitive Categorization (% distribution)
PaperID Factual Conceptual Procedural Meta-cognitive
MU2014S 18.75 50.00 31.25 0.00
MU2013S 0.00 61.54 38.46 0.00
MU2012S 0.00 63.64 36.36 0.00
MU2014W 26.67 53.33 20.00 0.00
MU2013W 7.69 53.85 38.46 0.00
MU2012W 15.38 69.23 15.38 0.00
MU2011W 0.00 53.85 46.15 0.00
PU2014W 6.67 66.67 26.67 0.00
PU2013W 6.67 60.00 33.33 0.00
PU2012W 3.33 43.33 53.33 0.00
PU2014S 39.29 25.00 35.71 0.00
PU2013S 10.71 39.29 50.00 0.00
PU2011S 7.14 53.57 39.29 0.00
VU2014W 4.76 28.57 66.67 0.00
VU2013W 17.65 47.06 35.29 0.00
VU2012W 14.29 35.71 50.00 0.00
VU2011W 24.00 20.00 56.00 0.00
VU2012M 26.09 43.48 30.43 0.00
VU2013M 40.91 22.73 36.36 0.00
VU2014M 25.00 33.33 41.67 0.00
GU2014W 5.88 47.06 47.06 0.00
GU2013W 11.11 50.00 38.89 0.00
GU2014S 41.18 17.65 41.18 0.00
GU2013S 17.65 58.82 23.53 0.00
GU2012S 11.76 35.29 52.94 0.00
GU2011S 35.29 41.18 23.53 0.00
AU2014S 40.00 36.00 24.00 0.00
AU2013S 34.48 27.59 37.93 0.00
AU2013W 34.48 20.69 44.83 0.00
AU2012W 20.00 55.00 25.00 0.00
WBTU2013 38.89 27.78 33.33 0.00
WBTU2012 29.03 41.94 29.03 0.00
WBTU2011 21.05 31.58 47.37 0.00
PTU2010S 50.00 16.67 33.33 0.00
PTU2009W 61.90 14.29 23.81 0.00
PTU2009S 16.67 27.78 55.56 0.00
DBATU2015S 33.33 33.33 33.33 0.00
DBATU2014W 33.33 44.44 22.22 0.00
DBATU2014S 21.05 63.16 15.79 0.00
TABLE VII: Paper-wise Analysis of Knowledge Categorization (% distribution)
PaperID SP SPM TE RE SD SC SVV SE SR FM
MU2014S 25.00 31.25 12.50 6.25 25.00 0.00 0.00 0.00 0.00 0.00
MU2013S 0.00 25.00 25.00 16.67 16.67 0.00 16.67 0.00 0.00 0.00
MU2012S 0.00 33.33 11.11 22.22 33.33 0.00 0.00 0.00 0.00 0.00
MU2014W 14.29 28.57 7.14 14.29 14.29 7.14 7.14 7.14 0.00 0.00
MU2013W 8.33 33.33 16.67 8.33 8.33 16.67 8.33 0.00 0.00 0.00
MU2012W 8.33 25.00 8.33 25.00 25.00 0.00 8.33 0.00 0.00 0.00
MU2011W 0.00 33.33 8.33 16.67 16.67 8.33 8.33 0.00 8.33 0.00
PU2014W 20.00 30.00 0.00 0.00 30.00 0.00 20.00 0.00 0.00 0.00
PU2013W 23.33 23.33 3.33 3.33 26.67 0.00 20.00 0.00 0.00 0.00
PU2012W 23.33 23.33 3.33 3.33 26.67 0.00 20.00 0.00 0.00 0.00
PU2014S 25.00 21.43 7.14 10.71 14.29 0.00 14.29 3.57 3.57 0.00
PU2013S 14.29 28.57 3.57 10.71 17.86 3.57 21.43 0.00 0.00 0.00
PU2011S 17.86 32.14 3.57 3.57 28.57 0.00 14.29 0.00 0.00 0.00
VU2014W 25.00 5.00 0.00 20.00 25.00 0.00 15.00 5.00 5.00 0.00
VU2013W 23.53 17.65 0.00 17.65 17.65 5.88 5.88 5.88 5.88 0.00
VU2012W 28.57 14.29 0.00 14.29 7.14 0.00 14.29 7.14 14.29 0.00
VU2011W 20.00 16.00 8.00 16.00 24.00 4.00 0.00 4.00 8.00 0.00
VU2012M 26.09 17.39 0.00 17.39 13.04 0.00 13.04 0.00 13.04 0.00
VU2013M 22.73 22.73 0.00 13.64 18.18 0.00 9.09 4.55 9.09 0.00
VU2014M 25.00 12.50 0.00 12.50 25.00 4.17 8.33 0.00 12.50 0.00
GU2014W 35.29 17.65 5.88 11.76 17.65 0.00 11.76 0.00 0.00 0.00
GU2013W 22.22 27.78 11.11 0.00 22.22 0.00 16.67 0.00 0.00 0.00
GU2014S 17.65 23.53 5.88 29.41 11.76 0.00 11.76 0.00 0.00 0.00
GU2013S 18.75 31.25 6.25 6.25 18.75 0.00 12.50 0.00 6.25 0.00
GU2012S 29.41 11.76 0.00 5.88 35.29 0.00 17.65 0.00 0.00 0.00
GU2011S 29.41 17.65 5.88 5.88 11.76 5.88 17.65 5.88 0.00 0.00
AU2014S 24.00 12.00 8.00 12.00 20.00 4.00 20.00 0.00 0.00 0.00
AU2013S 6.90 17.24 6.90 13.79 24.14 6.90 20.69 0.00 3.45 0.00
AU2013W 13.79 17.24 3.45 24.14 17.24 3.45 20.69 0.00 0.00 0.00
AU2012W 21.05 15.79 0.00 21.05 21.05 0.00 21.05 0.00 0.00 0.00
WBTU2013 18.75 18.75 6.25 12.50 25.00 6.25 6.25 0.00 6.25 0.00
WBTU2012 3.33 16.67 0.00 13.33 23.33 3.33 30.00 3.33 6.67 0.00
WBTU2011 5.88 23.53 17.65 5.88 5.88 0.00 29.41 0.00 11.76 0.00
PTU2010S 11.11 22.22 0.00 22.22 5.56 0.00 33.33 0.00 5.56 0.00
PTU2009W 4.76 23.81 4.76 9.52 9.52 4.76 19.05 0.00 23.81 0.00
PTU2009S 16.67 22.22 11.11 5.56 27.78 5.56 5.56 0.00 5.56 0.00
DBATU2015S 18.18 27.27 0.00 9.09 27.27 0.00 9.09 0.00 9.09 0.00
DBATU2014W 12.50 25.00 0.00 6.25 43.75 6.25 6.25 0.00 0.00 0.00
DBATU2014S 11.76 11.76 0.00 23.53 35.29 5.88 11.76 0.00 0.00 0.00
TABLE VIII: Paper-wise Analysis for Knowledge Areas (% distribution)

IV Results of the Analysis

This section describes the results of the analysis carried out after assigning categories to the question items.

IV-A Cognitive Level Categorization

Table VI shows the paper-wise analysis of question items as per the cognitive levels. Entries in Table VI indicate the percentage of questions that belong to a particular cognitive category. For example, in Table VI, 6.25% of the questions in the examination with paper ID MU2014S test students on the Remember skill. All the paper-wise cognitive analyses are merged to find the average values for the cognitive categorization, as shown in Figure 3. In summary, students are tested on cognitive categories in the order of Understand (58.44%), Remember (19.02%), Apply (7.56%), Create (6.05%), Analyze (5.04%), and Evaluate (3.90%).
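The paper does not state exactly how the paper-wise rows are merged into the overall averages; the sketch below assumes a simple unweighted mean of the per-paper percentage rows of Table VI (weighting each paper by its number of question items would give somewhat different values).

```java
import java.util.Arrays;

// Sketch: unweighted mean of per-paper percentage rows (Table VI style).
// Each row holds percentages for the six cognitive categories in the
// order Remember, Understand, Apply, Analyze, Evaluate, Create.
public class MergeDistributions {
    public static double[] average(double[][] paperRows) {
        double[] mean = new double[6];
        for (double[] row : paperRows) {
            for (int i = 0; i < 6; i++) mean[i] += row[i];
        }
        for (int i = 0; i < 6; i++) mean[i] /= paperRows.length;
        return mean;
    }

    public static void main(String[] args) {
        double[][] rows = {
            {6.25, 43.75, 18.75, 0.00, 25.00, 6.25},  // MU2014S
            {0.00, 61.54, 7.69, 0.00, 15.38, 15.38},  // MU2013S
        };
        System.out.println(Arrays.toString(average(rows)));
    }
}
```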

Fig. 3: Cognitive level-wise contribution

IV-B Knowledge Level Categorization

Table VII shows the paper-wise analysis of question items according to knowledge types. Entries in Table VII indicate the percentage of questions that belong to a particular type of knowledge. For example, in Table VII, 31.25% of the questions in the examination with paper ID MU2014S test students on Procedural knowledge. All the paper-wise knowledge level analyses are merged to find the average values for the knowledge level distribution, as shown in Figure 4. In general, Indian universities test students on types of knowledge in the order of Conceptual (40.43%), Procedural (37.15%), and Factual (22.42%).


Fig. 4: Knowledge category-wise contribution

IV-C Distribution across the Knowledge Areas

Table VIII shows the paper-wise analysis of question items distributed across the knowledge units. Entries in Table VIII indicate the percentage of questions that belong to a particular knowledge unit. For example, in Table VIII, 25% of the questions in the examination with paper ID MU2014S belong to the unit on Software Processes (SP). All the paper-wise analyses are merged to find the average values for the distribution of question items across the various knowledge units, as shown in Figure 5. In general, Software Design (SD), Software Project Management (SPM), and Software Processes (SP) are the three most favored knowledge units for testing software engineering specific skills. Surprisingly, no university tests its students on Formal Methods (FM) in a course on Software Engineering.


Fig. 5: Knowledge unit-wise contribution

V Related Work

We have presented a survey of the skill and knowledge levels assessed through software engineering examinations conducted by Indian universities. Categories from the Revised Bloom's Taxonomy are used to perform the analysis of question items. To the best of our knowledge, this is the first attempt at conducting such a survey in the context of Indian universities. The RBT, however, has been extensively applied by earlier researchers for various purposes. In this section, we present a brief review of applications of RBT in software engineering education and in conducting examinations.

In [5], the authors propose a question paper preparation system based on content-style separation principles. The purpose of the system is to generate questions belonging to different categories of Bloom's taxonomy. A generic visual model for an automated examination system is proposed in [8] using UML as the modeling language. The system is generic in the sense that it can be configured according to the requirements of an institution; furthermore, the model provides performance analysis of students. The authors of [10] report a multi-institutional investigation into the reading and comprehension skills of novice programmers. The Bloom's and SOLO taxonomies are used to analyze the results of programming exercises carried out by students at a number of universities. A rule-based classification scheme to analyze question items using Bloom's taxonomy is presented in [9]. The authors point out that the effectiveness of such classifier systems is one of the concerns when classifying question items according to Bloom's taxonomy.

Unlike these earlier applications of RBT, in this paper we combine RBT with software engineering specific knowledge areas and use the combination as the framework to analyze question items. By adding SE knowledge areas to RBT, the analysis framework becomes more relevant for assessing software engineering specific skills.

VI Conclusion

This paper presents a qualitative assessment of question items collected from end-semester examinations for the course on Software Engineering conducted by various Indian universities. While analyzing the question items, some of the challenges related to the use of tools and the action-verb list used during cognitive categorization. Some action verbs appear in more than one category. For example, the action verb Choose appears in the categories Remember, Apply, Evaluate, and Create. So it becomes difficult to categorize question items only on the basis of action verbs. In such situations, the context of a question needs to be taken into consideration for the appropriate categorization of the question item.

Combining the RBT framework with domain-specific knowledge areas is the main highlight of the analysis method used in this paper. We found that the Revised Bloom's Taxonomy (RBT) is a useful framework for assessing the generic skills and knowledge levels tested, but it is inadequate for assessing domain-specific skills in general and Software Engineering specific skills in particular. To overcome this limitation of the RBT framework, we extended it by adding Software Engineering specific knowledge areas. The second highlight of the paper is the creation of a classified question bank of about eight hundred questions from the discipline of software engineering. This question bank, in which each question item is classified as per cognitive and knowledge categories, can also be used to test the performance and effectiveness of any automated tool implemented for the categorization of question items.

The results of the analyses presented in this paper can be used by universities to design an advanced course on Software Engineering, or by software development organizations to design pre-induction training programs.

References

[1] https://www.acm.org/education/cs2013-final-report.pdf.
[2] http://www.allsubjects4you.com/ptu-b.tech-cse-question-papers.htm.
[3] http://www.apu.edu/live_data/files/333/blooms_taxonomy_action_verbs.pdf.
[4] http://www.stupidsid.com/study-resources/documents/university-papers.htm.
[5] Kah Ming Boon and Lian Tze Lim. An examination question paper preparation system with content-style separation and Bloom's taxonomy categorisation. In The Third International Conference on E-Learning and E-Technologies in Education (ICEEE2014), pages 39–47. The Society of Digital Information and Wireless Communication, 2014.
[6] Association for Computing Machinery (ACM) Joint Task Force on Computing Curricula and IEEE Computer Society. Computer Science Curricula 2013: Curriculum Guidelines for Undergraduate Degree Programs in Computer Science. ACM, New York, NY, USA, 2013.
[7] David R. Krathwohl. A revision of Bloom's taxonomy: An overview. Theory Into Practice, 41(4):212–218, 2002.
[8] Muhammad Rashid Naeem, Weihua Zhu, Adeel Akbar Memon, and Muhammad Tahir. Improving automatic exams using generic UML model for better analysis and performance evaluation. American Journal of Systems and Software, 2(2):50–55, 2014.
[9] Nazlia Omar, Syahidah Sufi Haris, Rosilah Hassan, Haslina Arshad, Masura Rahmat, Noor Faridatul Ainun Zainal, and Rozli Zulkifli. Automated analysis of exam questions according to Bloom's taxonomy. Procedia - Social and Behavioral Sciences, 59:297–303, 2012.
[10] Jacqueline L. Whalley, Raymond Lister, Errol Thompson, Tony Clear, Phil Robbins, PK Kumar, and Christine Prasad. An Australasian study of reading and comprehension skills in novice programmers, using the Bloom and SOLO taxonomies. In Proceedings of the 8th Australasian Conference on Computing Education, Volume 52, pages 243–252. Australian Computer Society, Inc., 2006.