
Psicothema, 2005. Vol. 17 (nº 1), 164-168




DECISION AIDING TOOL FOR UNIVERSITY SUBJECTS. D.A.T.U.S.

Orfelio G. León and Hilda Gambara

Universidad Autónoma de Madrid

In the first part we present a tool (D.A.T.U.S.) to help students rank the optional subjects they wish to take when not all of them can be chosen. Conflicting interests, such as personal appeal vs. ease of passing, or teacher vs. places available, as well as the number of alternatives, make the task complex and help advisable. D.A.T.U.S. provides the student with a set of criteria from which to choose those that best fit his or her case, and a set of scales for evaluating the subjects. Taking into account the importance the decision maker attributes to the chosen criteria, the tool provides a final value that ranks the subjects. The second part of the study is devoted to assessing the validity (face, content and convergent) of D.A.T.U.S. With the participation of 98 final-year undergraduates, we obtained convergent validity values of around .89. (Key words: Decision aid, University counseling, Validity of the decision aid, Applied decision analysis).




The decision context. The context that produces the need for a decision aid is the choice of optional subjects in the university curriculum. Since the reform of undergraduate programs, students in Spanish universities have been allowed to choose the subjects they study at certain points in their degree course, as is the case in the majority of countries. The choice of subjects is guided by the interests of the decision makers, which include the intrinsic appeal of the subject, its applicability to the labor market, the prestige of the teaching staff and, finally, how difficult the subject is to pass. These values may be mediated by issues such as the availability of places on the course, timetabling or exam-date compatibility. Many students complain that the choice is made difficult by conflicting factors: the teacher they like does not teach at the times they are scheduled to attend class, some subjects appeal but are not their specialty, subjects that may be advantageous in terms of the labor market are difficult to pass, and, after all that, they only discover whether places are available when they enrol, so that they often have to choose new subjects in a hurry. This situation has been observed over several years and in a variety of institutions, and students, directly or through their associations, have requested some type of counseling. One possible action is to counsel them so that they make their criteria explicit, put them in order, obtain the necessary information on the various subjects and make their choice at home in good time. From the decision-making field we know that when a task is perceived as difficult, decision makers choose simple strategies that lead to poor decisions (Iglesias, de la Fuente and Martín, 2000); difficulty can also result in emotional conflict (Regueiro and León, 2003). On the other hand, we know that it is possible to teach how to make decisions (Gambara and León, 2002). From the literature, we see that decision aids in the academic field have concentrated on choice of career, and their criteria are not directly applicable to this context (Katz, 1980; Wooler and Lewis, 1982). Another possible action is to provide students with a tool that guides them through the decision process, gives them clues, helps them make judgments and integrates their opinions into a final value that allows them to rank the subjects. Making such an action possible constitutes the central objective of this work; the second objective is to provide evidence of the validity of the tool.


The construction of the instrument (Objective 1). Our aim is to develop a tool to aid the ordering of optional university subjects prior to application. We have called this tool D.A.T.U.S.: Decision Aiding Tool for University Subjects. The aid will comprise four modules: a menu of criteria —«objectives», in the terminology of decisions— for choosing subjects; a set of scales —«attributes»— for judging the subjects; a weighting model for the objectives; and an aggregation model for all the values.

The result will be a decision analysis with a pre-configured structure, in which decision makers are given maximum help for making their decision with maximum independence. The usefulness of this approach has been pointed out by Keller and Ho (1988) in a review of studies on the use of generic problem structures and the role of computer aids in structuring the problem. Moreover, hierarchies of objectives used as decision aids have had practical applications in several fields, such as marketing and oil-exploration strategies (Dyer and Larsen, 1984).

The details of D.A.T.U.S. will be shown in the development of the first two modules. The objectives and attributes should adequately reflect the students’ interests. To this end, a group of students from different degree courses will respond to a questionnaire (Keeney, 1992) designed to elicit the values that guide them in the choice of subjects. On the basis of this material we will create a super-tree that includes all the objectives, integrating fundamental objectives as main branches and specific objectives as secondary branches. This tree will be created following the specifications of Keeney (1992): a combined hierarchy of objectives should include all the individual objectives, bearing in mind that those that are essentially similar should be aggregated. An example of an aggregated structure can be found in the assessment of the impact of mining activity on a virgin area (Gregory and Keeney, 1994). The purpose of offering the objectives already made explicit is that decision makers, on seeing the set, use those with which they identify and leave out the rest. The reason for presenting a pre-configured structure is that eliciting fundamental objectives and expanding specific ones are time-consuming activities that are difficult to carry out without help; if the objectives are presented already organized, the student will not need to consult anyone, the process will be quick, and good measurement conditions will be assured. Subjects will be assessed according to their level of fulfillment of the specific objectives. The decision maker may opt for an intuitive evaluation on a 0-10 scale or may use an ad hoc attribute provided in the tool, which breaks the corresponding specific objective down into observable facets. Weighting of the objectives will be carried out with a direct technique, that is, by asking the decision maker to compare the importance of each objective in relation to the essential objective of the decision. The model for aggregating the values will be the one most commonly used in decision analysis: the simple additive model (von Winterfeldt and Edwards, 1986). This model additively aggregates the evaluations of the subjects, weighted by the weights of the objectives. The final value indicates, for each subject, an average level of achievement of the objectives made explicit by the decision maker.
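In formal terms (the notation below is ours, not part of the original tool), the simple additive model assigns each subject $a$ a value

$$V(a) = \sum_{k=1}^{n} w_k \, x_k(a), \qquad \sum_{k=1}^{n} w_k = 1,$$

where $x_k(a)$ is the decision maker's evaluation of subject $a$ on the $k$-th selected specific objective (expressed on a common scale) and $w_k$ is the normalized weight attached to that objective; subjects are then ranked by $V(a)$.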


The validity of the tool (Objective 2). The validity of D.A.T.U.S. will be assessed in terms of the following aspects. First aspect: face validity (Objective 2.1). Does D.A.T.U.S. provide an ordering of a group of subjects that takes into account the interests of the decision maker? In the decision maker’s opinion, does D.A.T.U.S. fulfil the function of helping with the decision? Second aspect: content validity (Objective 2.2). Does D.A.T.U.S. take into account the tastes and interests of decision makers? Does it contain all the data relevant to making the decision? Are the fundamental objectives shown by D.A.T.U.S. considered as such by the decision makers? Third aspect: convergent validity (Objective 2.3). Do the values of the optional subjects provided by D.A.T.U.S. correlate with the students’ reported satisfaction?

As convergent validity constitutes the main part of this section, we briefly discuss this issue. What do we know about the convergent validity of decision aids? Studies of the validity of the rating process in decisions involving multiple objectives (Multiattribute Utility Theory, MAUT) are reviewed in Fischer (1977) and von Winterfeldt and Edwards (1986, pp. 362-369). On comparing holistic judgments with MAUT judgments —convergent validity— von Winterfeldt and Edwards’ summary of the reviewed works reports values of around .80. Subsequently, Morera and Budescu (1998), in a study of convergent validity, found an average value of .624 using the SMARTS technique (Edwards, 1977) and an average value of .448 using the AHP technique (Saaty, 1980). However, these validity studies were carried out mainly in laboratory contexts, in which, normally, all participants respond to the same set of options presented by the researcher and all use the same set of attributes. This uniformity is aimed at controlling extraneous variables and increasing the internal validity of the research, but it reduces its external validity. Moreover, we know that decision makers change their behavior as a result of even the smallest modifications in the task (Payne, Bettman and Johnson, 1993). Would we expect the same convergent validity values when the elements of the task are not the same for all participants? (Objective 2.3).

In summary, the aim of this work is to address the following objectives: Objective 1: Elaboration of D.A.T.U.S. (Study 1). Objective 2: Evaluation of the validity of D.A.T.U.S. Objective 2.1: Face validity (Study 2). Objective 2.2: Content validity (Study 2). Objective 2.3: Convergent validity (Study 2).

Study 1

Method

Participants

Sixty final-year undergraduates from different degree courses who had to choose optional subjects in the psychology faculty took part. Thirty, chosen at random, collaborated in phases one and two, and the rest in phase three. Mean age was approximately 23 years, and around 70% were male. All participated voluntarily.

Materials

The questionnaire for eliciting objectives designed by Keeney (1992), translated into Spanish and adapted to this context.

Procedure

There were three phases: one, generation of the super-tree; two, preparation of the attributes corresponding to the specific objectives; and three, test of the degree of understanding of the instructions.

Phase one: we proceeded to form the structure of objectives on the basis of the responses to the questionnaire. The final form was as follows (see Table 1): a first fundamental objective, decomposed into four specific ones; a second fundamental objective, decomposed into five specific ones; and a third fundamental objective, decomposed into two specific ones. The objectives were written in such a way that they would be suitable for any subject and any degree course, and could be used by decision makers themselves.

Phase two: taking the responses to the questionnaire, our task was now the inverse of phase one: each specific objective had to be operationalized as an attribute with a series of observable and scalable facets, allowing us to evaluate how well each subject fulfilled that objective. This form of evaluation involves a degree of effort and a level of meticulousness that not all students are prepared to invest, so we decided to include an option that allowed a global and intuitive evaluation by means of a 0-10 scale. (In Spain all students are familiar with the 0-10 scale, as it is used for academic grades.)

Phase three began with a group of ten new participants. They were given the super-tree and asked to apply it to three subjects they knew well, and were told to ask about anything they did not understand. All participants’ questions were recorded. On the basis of their doubts and questions we made the necessary modifications, generating a second draft of the super-tree. With this new structure, a new group of ten students was asked to do the same as the first group. Once again, we recorded problems, doubts and proposals for modifying the way the objectives were written. Incorporating these, we drew up a third version, which was tested with a further ten participants. On this occasion only one question was asked, and we therefore considered the process of drawing up the instructions, objectives and attributes complete.

Results

Below we reproduce the super-tree of objectives and attributes of D.A.T.U.S.

A. Good training

A.1. Through programs whose content is: Novel with respect to content of major subjects (not at all= 0) (medium= 0.5) (very= 1); Applicable to world of work (not at all= 0) (medium= 0.5) (very= 1); Up-to-date: current references (not at all= 0) (medium= 0.5) (very= 1).

A.2. Through adequate participation of students in the learning process: Regular work set by the teacher (no= 0) (yes= 1); Students look for additional information to share with colleagues (no= 0) (yes= 1).

A.3. Through maximum congruence of the subject with the required profile: Clearly congruent with the required profile/s (no= 0) (some degree= 0.5) (yes = 1).

A.4. By facilitating incorporation into world of work: Information available from world of work demonstrating that course content is necessary (no= 0) (yes= 1).

B. Learning in most satisfactory way possible

B.1. Good teacher: Knows material well (no= 0) (yes= 1); Teaches with «feeling» (no= 0) (yes= 1); Makes effort to be understood (no= 0) (yes= 1); Establishes relationships between content and real world (no= 0) (yes= 1); Allows discussion of his/her approach (no= 0) (yes= 1); Fulfils formal obligations (no= 0) (yes= 1); Provides additional information (no= 0) (yes= 1); Not merely teacher but impassioned expert (no= 0) (yes= 1).

B.2. Organization of teaching oriented to quality: Ideal number of students for the subject (no= 0) (yes= 1); Adequate resources: board, books, OHP, videos, slides, computer (no= 0) (yes= 1); Sufficient number of credits awarded (no= 0) (yes= 1); Content different from other options (no= 0) (yes= 1); Links made to other subjects (no= 0) (yes= 1); Required level of previous knowledge made explicit (no= 0) (yes= 1).

B.3. Useful practicals: Well planned (no= 0) (yes= 1); Learn something (no= 0) (yes= 1); Time required reasonable (no= 0) (yes= 1); Assessed (no= 0) (yes= 1).

B.4. Fair assessment: Requirements in accordance with depth of material taught (no= 0) (yes= 1); Requirements in accordance with amount of material taught (no= 0) (yes= 1); Sufficient time (no= 0) (yes= 1); Questions easy to understand (no= 0) (yes= 1); Form of exam suited to material (no= 0) (yes= 1).

B.5. Something learned: Students that took the course gave their opinion: (Waste of time= 0); (Intermediate position= 0.5); (Really learned something= 1).

C. Finish degree within reasonable period

C.1. Compatibility of option with other activities: Compatibility of timetable with other subjects (incompatible= 0) (partial= 0.5) (total= 1); Compatibility of timetable with activities outside university (incompatible= 0) (partial= 0.5) (total= 1).

C.2. Easy to pass: Low level of requirement (no= 0) (medium= 0.5) (yes= 1); Material intrinsically easy (no= 0) (medium= 1) (yes= 2); Requires little time (reading, practicals, obligatory attendance) (no= 0) (yes= 1); Exam not programmed close to others in time (no= 0) (yes= 1).

(The complete application of DATUS also included a weighting of the objectives and aggregation of the values in a final score.)
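To make the flow from facet scores to a final D value concrete, the following Python sketch mirrors the structure described above. It is ours, not part of the published tool (which participants applied on paper): the objective labels, the example numbers and, in particular, the step that sums facet points and rescales them to 0-10 are assumptions, since the paper does not spell out that rescaling.

```python
# Illustrative sketch of a D.A.T.U.S.-style evaluation (not the authors'
# original implementation). Objective names and numbers are hypothetical.

def attribute_score(facet_points, facet_max):
    """Rescale the summed facet points of one specific objective to 0-10.
    Summing the 0/0.5/1 facet scores and rescaling is one straightforward
    reading of the attributes listed above; the paper leaves this implicit."""
    return 10.0 * sum(facet_points) / sum(facet_max)

def datus_value(objective_scores, weights):
    """Simple additive model: weighted average of the 0-10 objective scores,
    with weights normalized to sum to 1 (von Winterfeldt and Edwards, 1986)."""
    total_w = sum(weights[k] for k in objective_scores)
    return sum(weights[k] * objective_scores[k] for k in objective_scores) / total_w

# Hypothetical student: three selected specific objectives, rated for one subject.
scores = {
    "A.1 content": attribute_score([1.0, 0.5, 1.0], [1, 1, 1]),       # novel, applicable, up to date
    "B.1 teacher": attribute_score([1, 1, 1, 0, 1, 1, 0, 1], [1] * 8),
    "C.2 easy to pass": 6.0,   # intuitive 0-10 rating instead of the ad hoc attribute
}
weights = {"A.1 content": 40, "B.1 teacher": 40, "C.2 easy to pass": 20}  # directly elicited importance

print(round(datus_value(scores, weights), 2))   # final D value on the 0-10 scale
```

Repeating this computation for every subject under consideration and sorting by the resulting values yields the ranking that D.A.T.U.S. offers the student.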

Study 2

Method

Participants

One hundred final-year psychology students participated voluntarily in the study (two failed to complete the procedures). They had studied at least five optional subjects. Their ages ranged from 21 to 24 years, and approximately 75% were women. Confidentiality of the data was assured, and participants were offered the possibility of discussing the results in private after the study.

Procedure

The design of this study was correlational. One measure was the evaluation of the subjects with D.A.T.U.S. (D); the other was a global evaluation of satisfaction (S) with the subjects. The degree of covariation between D and S was measured. D values were normalized to a 0-10 scale, and satisfaction (S) was measured on a subjective 0-10 scale. Three measurements were recorded: (a) direct convergent validity, by means of the Pearson correlation between D and S for each participant; (b) choice agreement, as defined in Buede and Maxwell (1995), by means of the percentage of agreement between the alternative placed first by D and by S, for each participant; and (c) the percentage of agreement between the alternative placed last by D and by S, for each participant.
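As a minimal sketch of how these three indices can be computed from one participant's D and S ratings (our own code with made-up numbers; the paper reports the statistics but not any software), consider:

```python
import numpy as np

def convergent_validity(d, s):
    """Pearson correlation between the D.A.T.U.S. values (d) and the
    satisfaction ratings (s) of one participant's optional subjects."""
    return np.corrcoef(d, s)[0, 1]

def first_place_agreement(d, s):
    """1 if the subject ranked first by D is also ranked first by S, else 0."""
    return int(np.argmax(d) == np.argmax(s))

def last_place_agreement(d, s):
    """1 if the subject ranked last by D is also ranked last by S, else 0."""
    return int(np.argmin(d) == np.argmin(s))

# Hypothetical participant: five elective subjects on the 0-10 scale.
d = np.array([7.5, 4.2, 8.9, 6.1, 3.0])   # D.A.T.U.S. values
s = np.array([8.0, 5.0, 9.0, 5.5, 2.5])   # satisfaction ratings

print(convergent_validity(d, s))
print(first_place_agreement(d, s), last_place_agreement(d, s))
# Averaging r across participants, and expressing the two agreement
# indicators as percentages, gives the group-level figures in the Results.
```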

The 100 students were randomly assigned to two weight-elicitation techniques: 49 to SMARTS and 49 to GRAPA (León, 1997). We used two techniques to ensure that the validity scores were not confounded with the weighting procedure. In turn, in order to control for a possible order effect, each group was divided at random into two subgroups that carried out the task in the orders S-D and D-S. The D values were obtained in the following way:

Phase 1: Generation of the structure. (a) All students were provided with the super-tree of objectives of D.A.T.U.S. They were asked to select the objectives that fitted their interests and values, and to cross out the rest.

Phase 2: Rating of the alternatives. (a) Participants were asked to rate five elective courses they had already completed, and for which they had different degrees of preference. This phase was carried out collectively, though without permitting discussion among the participants. (b) They were asked to rate the satisfaction obtained from having studied the subjects on the objectives they themselves had selected in Phase 1.

Phase 3: Elicitation of weights. Participants were called one by one on subsequent days to assign weights to the objectives, in accordance with the technique to which they had been assigned (SMARTS or GRAPA). This stage was guided by an expert and lasted approximately 50 minutes for the SMARTS technique and 30 minutes for GRAPA.

Phase 4: Aggregation of values. With the weights obtained in Phase 3 and the values of the alternatives from Phase 2, the D results were obtained for all participants by means of the simple additive model.

Phase 5: Questionnaire on D.A.T.U.S. Once the results obtained with D.A.T.U.S. were known, participants responded in writing to three questions on the tool. The purpose of this was to complete the data on validity.

Results

Face validity: «Appropriate for the purposes at hand» (Cone and Foster, 1993, p. 157). (a) The D values obtained with D.A.T.U.S. ordered all the optional subjects for all participants. (b) To Question 1 of the final questionnaire, «Do you think this tool helps to clarify preferences between the subjects?», 95/98 participants replied in the affirmative. To Question 2, «Do you think this tool is useful for people who are ranking subjects prior to choosing which ones to take?», 97/98 replied in the affirmative.

Content validity: «It contains the type of material appropriate to the variable being assessed» (Cone and Foster, 1993, p. 157). (a) To Question 3, «Do you think your criteria for differentiating between the subjects are reflected in this tool?», 97/98 participants replied in the affirmative. (b) The complete structure of objectives was followed by 37 of the 98 participants. The remaining 61 decision makers generated structures of different sizes; the degree of use they made of each specific objective is shown in Table 1. All decision makers used the three global objectives proposed, and all used the specific objective B.5, «I really learned something».

Convergent validity: «It relates to other ways of assessing the same behavior» (Cone and Foster, 1993, p. 157). Prior to the main validity measurement, with a 2 x 2 ANOVA (order x weighting technique) we analyzed: (a) whether there was an order effect, D-S vs. S-D (dependent variable: the Fisher r-to-z transformation of each decision maker's correlation). We obtained F(1, 94)= .093, so the values could be grouped without considering the order in which they were given; (b) whether there were differences in validity associated with the SMARTS and GRAPA techniques. We obtained F(1, 94)= .01, so the values could be grouped without considering the weight-elicitation technique (there was no interaction between order and technique, F(1, 94)= .084). Thus, the results can also be treated as a single set of 98 decision analyses.
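A minimal sketch of this preliminary check, assuming the per-participant correlations are stored in a table together with their order and technique conditions (the code, column names and random data are ours; the paper does not report the software used):

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# Hypothetical data frame: one row per participant, with the Pearson r
# between D and S, the task order (D-S vs. S-D) and the weighting technique.
df = pd.DataFrame({
    "r": rng.uniform(0.4, 0.95, 98),
    "order": rng.choice(["D-S", "S-D"], 98),
    "technique": rng.choice(["SMARTS", "GRAPA"], 98),
})

# Fisher r-to-z transformation: z = arctanh(r), which normalizes the
# sampling distribution of the correlations before running the ANOVA.
df["z"] = np.arctanh(df["r"])

# 2 x 2 ANOVA (order x technique) on the transformed correlations.
model = ols("z ~ C(order) * C(technique)", data=df).fit()
print(anova_lm(model, typ=2))
```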

Table 2 shows the results in the four conditions, order (2) x technique (2), for the three dependent variables: (a) the Pearson correlations between D and S; (b) agreement on the first-placed alternative: the percentage of cases in which an optional subject was placed first by both D and S; (c) agreement on the last-placed alternative: the percentage of cases in which an optional subject was placed last by both D and S.

The convergent validity (calculated as the mean correlation between D and S) for the group of 98 participants was .758. The degree of agreement for the first alternative was 64.3%, and for the last alternative, 78.6%. (If agreement were calculated for the first three alternatives as a group, the value would rise to 87.3%.)

Discussion

D.A.T.U.S. is a tool that actually helps to rank university subjects (face validity); its core is a super-tree of values that suitably represents those of students (content validity). When all the relevant information for using the tool is available, we can aspire to very high convergent validity (around .90).

It can be stated that Objective 1 has been fulfilled, in that a specific aid for the selection of university subjects has been developed. Its use assumes that the decision maker has the academic experience needed to select a set of objectives from among those presented by D.A.T.U.S., and that the user looks for the information necessary for assessing how attractive the subjects are. Both the instructions and the way the objectives are written have been shown to be comprehensible for the intended type of user.

With regard to the second objective, in addition to the expected face validity, good levels of content and convergent validity were found. It should be borne in mind that the three fundamental objectives proposed in D.A.T.U.S. were chosen by 100% of participants, and that 97/98 were of the opinion that their criteria were well reflected in the objectives selected. There were doubts as to whether the convergent validity would be low, given that each decision maker had his or her own set of alternatives and hierarchy of values. The central values obtained were .88-.89, similar to those of the best-performing studies among those reviewed by von Winterfeldt and Edwards (1986). This validity study also avoids the danger, pointed out by Morera and Budescu (1998), of using small samples, since it employed 98 participants. When choice agreement is evaluated as the percentage of cases in which the first-ranked option from D.A.T.U.S. is also the first-ranked in satisfaction, there is agreement in 64.3% of cases. In our sample, decision makers identify the lowest-ranked alternative best: there is 78.6% agreement on the worst option. The apparently low percentage of agreement on the first-ranked option can be explained in part by the fact that participants had chosen several alternatives simultaneously. According to Svenson (1992), post-decisional ambiguity depends on the degree of differentiation and consolidation prior to the decision. When several alternatives are chosen, this differentiation is less necessary, and post-decisional consolidation will also be less necessary, which would explain why the first alternative does not dominate the others so clearly. We can also offer an ad hoc explanation tied to the task we worked with: most of the alternatives satisfy their decision makers, so the differences between them are not very large; however, when an alternative fails to please, it is rated in a clearly different, and more negative, way.

In summary, from the data presented here we can conclude that, if the student has adequate information, we can be optimistic about the utility of D.A.T.U.S. for ranking a set of subjects (validity of .88 to .89). Ordering a set of subjects while taking conflicting interests into account is a fairly complex task for which university students require some type of aid. D.A.T.U.S. is a tool that provides such aid, carrying out a decision analysis on a structure of objectives that decision makers adjust to their individual cases.

Acknowledgements

This research was supported by grant PB97-0041 from the Programa Sectorial de Promoción del Conocimiento (M.E.C.). We are grateful for comments by L. Robin Keller, and especially grateful to Susana García Sánchez for her help in collecting the data and for her comments on the research.

References

Buede, D.M. and Maxwell, D.T. (1995). Rank disagreement: a comparison of multicriteria methodologies. Journal of Multi-Criteria Decision Analysis, 4, 1-21.

Cone, J.D. and Foster, S.L. (1993). Dissertation and theses from start to finish. Washington D. C.: A.P.A.

Dyer, J. and Larsen, J.B. (1984). Using multiple objectives to approximate normative models. Annals of Operations Research, 2, 39-58.

Edwards, W. (1977). How to use multiattribute utility measurement for social decision making. IEEE Transactions on Systems, Man and Cybernetics, SMC-7, 326-340.

Fischer, G.W. (1977). Convergent validation of decomposed multi-attribute utility assessment procedures for risky and riskless decisions. Organizational Behavior and Human Performance, 18, 295-325.

Gambara, H. and León, O.G. (2002). Training and pre-decisional bias in a multiattribute decision task. Psicothema, 14, 233-238.

Gregory, R. and Keeney, R. (1994). Creating policy alternatives using stakeholder values. Management Science, 40, 1,035-1,048.

Iglesias, S., de la Fuente, I. and Martín, I. (2000). Efecto de las estrategias de decisión sobre el esfuerzo cognitivo. Psicothema, 12, 267-272.

Katz, M.R. (1980). SIGI: an interactive aid to career decision making. Journal of College Student Personnel, 21, 34-40.

Keeney, R. (1992). Value-focused thinking. Cambridge, MA: Harvard UP.

Keller, L.R. and Ho, J.L. (1988). Decision problem structuring: generating options. IEEE Transactions on Systems, Man and Cybernetics, 18, 715-728.

León, O.G. (1997). On the death of SMART and the birth of GRAPA. Organizational Behavior and Human Decision Processes, 71, 249-262.

Mellers, B.A., Schwartz, A. and Cooke, A.D.J. (1998). Judgment and decision making. Annual Review of Psychology, 49, 447-477.

Morera, O.F. and Budescu, D.V. (1998). A psychometric analysis of the «Divide and conquer» principle in multicriteria decision making. Organizational Behavior and Human Decision Processes, 75, 187-206.

Payne, J.W., Bettman, J.R. and Johnson, E.J. (1993). The adaptive decision maker. Cambridge: Cambridge University Press.

Regueiro, R. and León, O.G. (2003). Estrés en decisiones cotidianas. Psicothema, 15, 533-538.

Saaty, T.L. (1980). The analytic hierarchy process. New York: McGraw-Hill.

Stillwell, W.G., Barron, F.H. and Edwards, W. (1983). Evaluating credit applications: a validation of multiattribute weight elicitation techniques. Organizational Behavior and Human Performance, 32, 87-108.

Svenson, O. (1992). Differentiation and consolidation theory of human decision making: a frame of reference for the study of pre and post-decision processes. Acta Psychologica, 80, 143-168.

von Winterfeldt, D. and Edwards, W. (1986). Decision analysis and behavioral research. Cambridge: Cambridge UP.
