On Which Skills do Indian Universities Evaluate Software Engineering Students?
Universities conduct examinations to evaluate the skills and knowledge acquired by students. This paper presents an assessment of the skill and knowledge levels evaluated during Software Engineering examinations. The question items asked during examinations are analyzed along three dimensions: cognitive levels, knowledge levels, and knowledge areas. The Revised Bloom’s Taxonomy is used to classify question items along the dimensions of cognitive levels and knowledge levels. Question items are also classified into the knowledge areas specified in ACM/IEEE’s Computer Science Curricula. The analysis presented in this paper will be useful for software engineering educators to devise corrective interventions and for employers of fresh graduates to design pre-induction training programs.
An assessment of the skills acquired and knowledge gained through a course on Software Engineering is equally useful to academicians and industry professionals. Academicians can use the results of the assessment to devise appropriate interventions in case the results do not conform to the set learning objectives. Employers of fresh graduates may use the results to design pre-induction training programs.
One way to perform such an assessment is to analyze the question papers used for conducting end-semester examinations, because they include the most relevant information required for such an assessment. An end-semester question paper is typically designed to test students on a diverse range of skills, such as recalling a learned topic or applying a learned method to solve a particular problem. Further, question papers include questions from all the knowledge areas that are expected to be covered in a course on Software Engineering.
In this paper, we classify questions asked in an examination along three dimensions, namely, cognitive levels, knowledge levels and knowledge areas. The categories included in the Revised Bloom’s Taxonomy [7] are used to classify question items along the dimensions of knowledge and cognitive levels. Question items are also classified according to the topics included under the various knowledge areas of Software Engineering defined in ACM/IEEE’s Computer Science Curricula 2013 [6, 1].
II Analysis Framework
The classification framework used to analyze the question items is derived from two different sources. The main intention of the classification framework is to analyze question items from three different dimensions, as shown in Figure 1. The first two dimensions are the cognitive levels and knowledge levels defined in the Revised Bloom’s Taxonomy (RBT) [7]. Further, each question item asked in Software Engineering examinations belongs to a particular topic or course unit. Hence, the topics covered under the Software Engineering knowledge areas of ACM/IEEE Computer Science Curricula 2013 are also included. The first two dimensions cover generic learning skills that educators intend to impart to students, while the third dimension covers domain-specific skills that employers expect from a fresh graduate.
II-A Cognitive Levels
The cognitive process dimension in RBT is broadly organized into six categories, namely Remember, Understand, Apply, Analyze, Evaluate and Create, as shown in Figure 2. The category Remember captures the activity of retrieving knowledge from long-term memory. The activities of recognizing and recalling information, objects and events belong to the Remember category. The category Understand means to construct meaning out of the learning material presented in the form of either lectures or notes. The acts of interpreting, exemplifying, classifying, summarizing, inferring, comparing, and explaining belong to the category Understand. The third category, Apply, refers to carrying out or using a procedure in a given situation, and it includes the acts of executing and implementing. The fourth category, Analyze, refers to breaking down the learning material into its parts and relating the parts to establish an overall structure. The acts of differentiating, organizing and attributing are considered analytic processes. The fifth category, Evaluate, means the acts of checking and making judgments based on some criteria. The last cognitive process category from RBT is Create, and it means the acts of generating, planning and producing some product. A few example questions and their mapped cognitive levels are shown in Table I.
| Cognitive Level | Example Question Item |
|---|---|
| Remember | What is quality assurance? What are different parameters of quality? |
| Understand | Explain incremental process model with a suitable example. |
| Apply | How do you calculate Function Point (FP) and how it is used in estimation of a software project? |
| Analyze | Discuss and compare Test Driven Development (TDD) and Design Driven Testing (DDT). |
| Evaluate | Which life cycle model would you follow for developing the following project and why? (a) Library management system (b) A web application |
| Create | Develop a complete Use Case System for ATM machine. |
II-B Knowledge Levels
As shown in Figure 2, the knowledge dimension in RBT is classified into four categories: Factual, Conceptual, Procedural and Meta-cognitive knowledge. Factual information about specific terminologies (e.g., products, processes, life cycle models) and the basic elements that students must be well versed with is captured under Factual knowledge. The Conceptual knowledge category includes knowledge about classification categories, principles, models and theories. Some examples of conceptual knowledge are knowledge about life cycle models and the principle of modularity. Knowledge about procedures, methods and algorithms is included under the category of Procedural knowledge. An example of procedural knowledge is methods for Object-Oriented Analysis (e.g., CRC Cards). The last category, Meta-cognitive knowledge, corresponds to knowledge about cognition itself and understanding one’s own cognitive abilities. Table II depicts example questions and their mapped knowledge level categories.
| Knowledge Level | Example Question Item |
|---|---|
| Factual Knowledge | Explain the difference between software and hardware characteristics. |
| Conceptual Knowledge | Explain the waterfall life cycle model. |
| Procedural Knowledge | Explain how project scheduling and tracking is done for a software development project? |
| Meta-cognitive Knowledge | No question is mapped to this category. |
II-C Software Engineering Knowledge Areas
A set of guidelines to design an undergraduate program in Computer Science is specified in ACM/IEEE’s Computer Science Curricula (CS2013). In CS2013, the knowledge body of Computer Science is organized into eighteen Knowledge Areas (KA). Each KA is further subdivided into various Knowledge Units (KU). Software Engineering is one of the eighteen KAs and is subdivided into ten different KUs, as shown in Figure 2. In this paper, we have selected CS2013 as the reference knowledge body with the intention of bridging the non-uniformity in the content of the courses on Software Engineering offered by various Indian universities.
| Knowledge Unit | Example Question Item |
|---|---|
| Software Processes (SP) | Compare waterfall model and spiral model. |
| Software Project Management (SPM) | Describe project scheduling and tracking with any suitable example. |
| Tools and Environments (TE) | Explain Software Configuration Management in detail. |
| Requirement Engineering (RE) | Explain different steps in requirement engineering. |
| Software Design (SD) | List and explain the fundamental concepts for software design. |
| Software Construction (SC) | Compare conventional approach and object oriented approach to software development. What are the advantages of OOAD? |
| Software Verification and Validation (SVV) | What is software testing? Explain the software testing strategies. |
| Software Evolution (SE) | Define “Program Evolution Dynamics”. Discuss the Lehman laws for program evolution dynamics. |
| Software Reliability (SR) | What do you understand by software reliability? |
| Formal Methods (FM) | No question is mapped to this topic. |
| No. | University | No. of Question Papers | No. of Question Items |
|---|---|---|---|
| 1. | Visvesvaraya Technological University (VTU) | 7 | 146 |
| 2. | Savitribai Phule Pune University (PU) | 6 | 174 |
| 3. | Mumbai University (MU) | 7 | 94 |
| 4. | Gujarat Technological University (GTU) | 6 | 103 |
| 5. | Anna University (AU) | 4 | 103 |
| 6. | West Bengal Technological University (WBTU) | 3 | 68 |
| 7. | Punjab Technological University (PTU) | 3 | 57 |
| 8. | Dr. Babasaheb Ambedkar Technological University | | |
| Cognitive Level | Action Verbs |
|---|---|
| Remember | Choose, Define, Find, How, Label, List, Match, Name, Omit, Recall, Relate, Select, Show, Spell, Tell, What, When, Where, Which, Who, Why |
| Understand | Classify, Compare, Contrast, Demonstrate, Explain, Extend, Illustrate, Infer, Interpret, Outline, Relate, Rephrase, Show, Summarize, Translate |
| Apply | Apply, Build, Choose, Construct, Develop, Experiment with, Identify, Interview, Make use of, Model, Organize, Plan, Select, Solve, Utilize |
| Analyze | Analyze, Assume, Categorize, Classify, Compare, Conclusion, Contrast, Discover, Dissect, Distinguish, Divide, Examine, Function, Inference, Inspect, List, Motive, Relationships, Simplify, Survey, Take part in, Test for, Theme |
| Evaluate | Agree, Appraise, Assess, Award, Choose, Compare, Conclude, Criteria, Criticize, Decide, Deduct, Defend, Determine, Disprove, Estimate, Evaluate, Explain, Importance, Influence, Interpret, Judge, Justify, Mark, Measure, Opinion, Perceive, Prioritize, Prove, Rate, Recommend, Rule on, Select, Support, Value |
| Create | Adapt, Build, Change, Choose, Combine, Compile, Compose, Construct, Create, Delete, Design, Develop, Discuss, Elaborate, Estimate, Formulate, Happen, Imagine, Improve, Invent, Make up, Maximize, Minimize, Modify, Original, Originate, Plan, Predict, Propose, Solution, Solve, Suppose, Test, Theory |
III The Analysis Method
The analysis of question papers is carried out with the intention of answering the following questions.
(i) Do SE examinations test students for all cognitive skills?
This question is significant because students of an engineering under-graduate program are expected to be evaluated on higher order thinking skills such as Analyze and Create rather than on lower order skills such as Remember and Understand.
(ii) For which kinds of knowledge are students tested during SE examinations?
Answering this question is important because certain courses contain a specific kind of knowledge. For example, the content of a course on Data Structures and Algorithms is largely of the Procedural type, while the majority of the content of a course on Software Engineering is of the Conceptual type. The question items asked in an examination should reflect this tacit assumption.
(iii) Do SE examinations give sufficient coverage to all the knowledge units?
This question is significant because it verifies whether an examination sufficiently covers all the knowledge units that are expected to be covered or whether it is skewed towards a particular topic.
A majority of Indian universities adopt the standardized test as the assessment method to test the knowledge and skills of the students enrolled for a particular course at an affiliated college. In a standardized test, examinations are administered by universities. Students appear for an examination and answer the same question paper delivered at the same time across various affiliated colleges. The survey presented in this paper considers only those institutes which adopt the standardized test as an assessment method. Autonomous colleges conducting examinations specific to students enrolled in one particular institute are excluded from the survey. This section describes the main activities conducted during the survey.
III-A1 Collection of Question Papers
Question items are the basic unit of analysis for the survey presented in this paper. Question items are collected from end-semester examinations conducted by various universities. Most of the universities offer a course on Software Engineering during the third year of an under-graduate programme in either Computer Engineering or Information Technology. A few universities from all four regions of India are selected for the analysis. Easy accessibility of question papers in the public domain is the main criterion for selecting the universities. Most of the question papers included in the survey are downloaded from the official websites of the universities. Some of the question papers are also downloaded from websites [4, 2] hosting study material for engineering students. Question papers for the examinations held during the last five years are used for the analysis. Table IV shows the number of question papers selected and the total number of question items from each university.
III-A2 Preparation of Question Bank
A question bank in the form of a spreadsheet is prepared by picking up question items from the selected question papers. For each question item, the text of the question, the name of the university, the examination year, and the assigned categories (i.e., knowledge level, cognitive level, and knowledge area) are stored. The question bank includes about eight hundred questions asked in about forty question papers. Duplicate question items are not stored in the question bank, because the same questions may be repeated in multiple examinations.
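As an illustration, the de-duplication step described above can be sketched as follows. The field names and sample rows are hypothetical and do not reflect the authors' actual spreadsheet schema; only whitespace and case are normalized when comparing question texts.

```python
# Hypothetical sketch of building the question bank with duplicate removal.
# Field names ("text", "university", ...) are illustrative assumptions.

def build_question_bank(rows):
    """Collect question items, dropping questions repeated across papers."""
    seen = set()
    bank = []
    for row in rows:
        # Normalize whitespace and case so trivially different copies match.
        key = " ".join(row["text"].lower().split())
        if key in seen:
            continue  # same question repeated in another examination
        seen.add(key)
        bank.append(row)
    return bank

rows = [
    {"text": "Explain the waterfall life cycle model.", "university": "PU",
     "year": 2014, "cognitive": "Understand", "knowledge": "Conceptual",
     "knowledge_area": "SP"},
    {"text": "Explain the waterfall  life cycle model.", "university": "MU",
     "year": 2015, "cognitive": "Understand", "knowledge": "Conceptual",
     "knowledge_area": "SP"},
]
bank = build_question_bank(rows)
print(len(bank))  # the second, duplicated item is dropped
```

A real question bank would additionally need to handle near-duplicates with reworded text, which simple normalization cannot catch.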
III-A3 Assignment of Categories
Each question item is classified along three different dimensions, i.e., cognitive level, knowledge type and knowledge unit. To assign a cognitive category, the category-wise list of action verbs prepared by Azusa Pacific University, California [3], shown in Table V, is used. Table I shows the assignment of cognitive categories to a few question items. The knowledge category is assigned by interpreting the noun phrases in the question item. The guidelines specified in [7] are used to classify the question items into the various knowledge categories. Some of the guidelines used during interpretation are also described in Section II-B. The knowledge unit is assigned to a question item by interpreting the content of the question. About eight hundred question items are analyzed and categories are assigned from the three different perspectives. A tool has been implemented in Java to assign cognitive level categories. Initially, cognitive level categorization is performed manually by the first author, who has taught a course on Software Engineering. Question items are also categorized through the tool and verified by the second and third authors.
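The verb-list lookup at the heart of this assignment can be sketched as follows. This is an illustrative Python approximation, not the authors' Java tool; the verb sets are small subsets of Table V, and a question may match several categories, in which case the context of the question must decide.

```python
# Minimal sketch of action-verb based cognitive categorization (assumed
# logic; the authors' actual Java tool may differ). Verb sets are subsets
# of the Table V lists.
VERB_MAP = {
    "Remember": {"define", "list", "what", "name", "recall"},
    "Understand": {"explain", "summarize", "illustrate", "compare"},
    "Apply": {"apply", "solve", "develop", "construct"},
    "Analyze": {"analyze", "distinguish", "examine"},
    "Evaluate": {"justify", "evaluate", "judge", "recommend"},
    "Create": {"design", "create", "formulate", "propose"},
}

def classify_cognitive(question):
    """Return every RBT category whose action verbs occur in the question.

    Several categories may match, since some verbs appear in more than
    one Table V list; ambiguity is resolved manually by context.
    """
    words = {w.strip("?.,()").lower() for w in question.split()}
    return [level for level, verbs in VERB_MAP.items() if words & verbs]

print(classify_cognitive(
    "Explain incremental process model with a suitable example."))
# ['Understand']
```

Because verbs such as "Choose" occur in four Table V lists, a purely verb-based classifier returns multiple candidates for some questions, which is why the manual verification step described above is needed.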
IV Results of the Analysis
This section describes the results of the analysis carried out after the assignment of categories to the various question items.
Cognitive Level Categorization
Table VI shows a paper-wise analysis of question items as per the cognitive levels. Entries in Table VI indicate the percentage of questions that belong to one particular cognitive category. For example, Table VI shows an examination paper in which 6.25% of the questions test students for one particular cognitive skill. All the paper-wise cognitive analyses are merged to find the average values for the cognitive categorization, as shown in Figure 3. In summary, students are tested for cognitive categories in the order of Understand (58.44%), Remember (19.02%), Apply (7.56%), Create (6.05%), Analyze (5.04%), and Evaluate (3.90%).
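The percentage distributions reported in this section amount to a frequency count over the classified question bank. A sketch, using made-up records rather than the actual bank:

```python
# Sketch of computing a percentage distribution over one classification
# dimension of the question bank (illustrative records, not the real data).
from collections import Counter

def distribution(items, key):
    """Percentage of items per category under the given dimension."""
    counts = Counter(item[key] for item in items)
    total = sum(counts.values())
    return {cat: round(100 * n / total, 2) for cat, n in counts.items()}

bank = [
    {"cognitive": "Understand"}, {"cognitive": "Understand"},
    {"cognitive": "Remember"}, {"cognitive": "Apply"},
]
print(distribution(bank, "cognitive"))
# {'Understand': 50.0, 'Remember': 25.0, 'Apply': 25.0}
```

The same function applied with `key="knowledge"` or `key="knowledge_area"` would yield the knowledge level and knowledge unit distributions of the following subsections.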
Knowledge Level Categorization
Table VII shows a paper-wise analysis of question items according to knowledge types. Entries in Table VII indicate the percentage of questions that belong to one particular type of knowledge. For example, Table VII shows an examination paper in which 31.25% of the questions test students for one particular type of knowledge. All the paper-wise knowledge level analyses are merged to find the average knowledge level distribution, as shown in Figure 4. In general, Indian universities test students for types of knowledge in the order of Conceptual (40.43%), Procedural (37.15%) and Factual (22.42%).
Distribution across the Knowledge Areas
Table VIII shows a paper-wise analysis of question items distributed across the knowledge units. Entries in Table VIII indicate the percentage of questions that belong to one particular knowledge unit. For example, Table VIII shows an examination paper in which 25% of the questions test students on one particular knowledge unit. All the paper-wise analyses are merged to find the average distribution of question items across the various knowledge units, as shown in Figure 5. In general, Software Design (SD), Software Project Management (SPM) and Software Processes (SP) are the three most favored knowledge units for testing software engineering specific skills. Surprisingly, no university tests its students for knowledge of Formal Methods (FM) in a course on Software Engineering.
V Related Work
We have presented a survey of the skill and knowledge levels assessed through software engineering examinations conducted by Indian universities. Categories from the Revised Bloom’s Taxonomy are used to perform the analysis of question items. To the best of our knowledge, this is the first attempt at conducting such a survey in the context of Indian universities. However, RBT has been extensively applied by earlier researchers for various purposes. In this section, we present a brief review of applications of RBT in software engineering education and in conducting examinations.
In [5], the authors propose a question paper preparation system based on content-style separation principles. The purpose of the system is to generate questions belonging to different categories of Bloom’s taxonomy. A generic visual model for an automated examination system has been proposed in [8], using UML as the modeling language. The system is generic in the sense that it can be configured according to the requirements of an institution. Furthermore, the model provides performance analysis of students. In [10], the authors present a report on a multi-institutional investigation into the reading and comprehension skills of novice programmers. The Bloom and SOLO taxonomies are used to analyze the results of programming exercises carried out by students at a number of universities. A rule-based classification scheme to analyze question items using Bloom’s taxonomy is presented in [9]. The authors point out that the effectiveness of such classifier systems is one of the concerns when classifying question items according to Bloom’s taxonomy.
Unlike these earlier applications of RBT, in this paper we combine RBT with software engineering specific knowledge areas and use the combination as the framework to analyze question items. By adding SE knowledge areas to RBT, the analysis framework becomes more relevant for assessing software engineering specific skills.
VI Conclusion
The paper presents a qualitative assessment of question items collected from end-semester examinations for the course on Software Engineering conducted by various Indian universities. While analyzing the question items, some of the challenges relate to the use of tools and the action-verb list used during cognitive categorization. Some action verbs appear in more than one category. For example, the action verb Choose appears in the categories Remember, Apply, Evaluate, and Create. So it becomes difficult to categorize question items only on the basis of action verbs. In such situations, the context of a question needs to be taken into consideration for the appropriate categorization of the question item.
Combining the RBT framework with domain-specific knowledge areas is the main highlight of the analysis method used in this paper. We found that the Revised Bloom’s Taxonomy (RBT) is a useful framework to assess the generic skills and knowledge levels tested, but it is inadequate for testing domain-specific skills in general and Software Engineering specific skills in particular. To overcome this limitation of the RBT framework, we extended it by adding Software Engineering specific knowledge areas. The second highlight of the paper is the creation of a classified question bank of about eight hundred questions from the discipline of software engineering. This question bank, in which each question item is classified as per cognitive and knowledge categories, can also be used to test the performance and effectiveness of any automated tool implemented for the categorization of question items.
The results of the analyses presented in this paper can be used by universities to design an advanced course on Software Engineering, or by software development organizations to design pre-induction training programs.
[1] https://www.acm.org/education/cs2013-final-report.pdf
[2] http://www.allsubjects4you.com/ptu-b.tech-cse-question-papers.htm
[3] http://www.apu.edu/live_data/files/333/blooms_taxonomy_action_verbs.pdf
[4] http://www.stupidsid.com/study-resources/documents/university-papers.htm
[5] Kah Ming Boon and Lian Tze Lim. An examination question paper preparation system with content-style separation and Bloom’s taxonomy categorisation. In The Third International Conference on E-Learning and E-Technologies in Education (ICEEE2014), pages 39–47. The Society of Digital Information and Wireless Communication, 2014.
[6] ACM Joint Task Force on Computing Curricula and IEEE Computer Society. Computer Science Curricula 2013: Curriculum Guidelines for Undergraduate Degree Programs in Computer Science. ACM, New York, NY, USA, 2013.
[7] David R. Krathwohl. A revision of Bloom’s taxonomy: An overview. Theory into Practice, 41(4):212–218, 2002.
[8] Muhammad Rashid Naeem, Weihua Zhu, Adeel Akbar Memon, and Muhammad Tahir. Improving automatic exams using generic UML model for better analysis and performance evaluation. American Journal of Systems and Software, 2(2):50–55, 2014.
[9] Nazlia Omar, Syahidah Sufi Haris, Rosilah Hassan, Haslina Arshad, Masura Rahmat, Noor Faridatul Ainun Zainal, and Rozli Zulkifli. Automated analysis of exam questions according to Bloom’s taxonomy. Procedia - Social and Behavioral Sciences, 59:297–303, 2012.
[10] Jacqueline L. Whalley, Raymond Lister, Errol Thompson, Tony Clear, Phil Robbins, P. K. Kumar, and Christine Prasad. An Australasian study of reading and comprehension skills in novice programmers, using the Bloom and SOLO taxonomies. In Proceedings of the 8th Australasian Conference on Computing Education, pages 243–252. Australian Computer Society, 2006.