The PolyU Student Feedback Questionnaire (SFQ) System

  Interpreting the Results from the Faculty/School-based Student Feedback Questionnaire (SFQ)  
  [Last modified September 2014]  
 

The Student Feedback Questionnaire (SFQ) is one of the formal channels at PolyU for collecting student feedback on teaching and learning for both developmental and judgemental purposes. The SFQ results can be used by teaching staff and programme/subject teams to identify the strengths and weaknesses of a subject, as well as of the teaching of the staff member concerned, for reflection and improvement. The results are also used as one source of evidence in judging a staff member’s teaching performance in the annual staff appraisal and in important personnel decisions regarding (re)appointments, tenure and promotion.

However, it should be noted that students’ ratings indicate only the perceptions of the students taking the subject, and they are not the only source of information about a staff member’s teaching performance or contribution (see Criteria for Evaluating Teaching Performance: A Concise Reference). There are other sources of teaching evidence, such as peer review results or teaching portfolios, that individual staff are encouraged to bring forward to staff appraisal meetings for discussion, or in support of their applications for contract renewals, promotions, or awards on the basis of their merits in teaching [see Operation Manual of The 2011 Framework for Appointment, Promotion and Retention of Academic Staff (HRO, 2012)].

The following sections explain how the SFQ results might be interpreted, and how the Faculty/School norms can be used to identify the relative strengths and weaknesses of staff’s teaching as reflected in the SFQ results.

 
 

Structure and Items of the SFQ

Understanding the SFQ Report

Guidelines for Interpreting the SFQ Results

Using the Faculty/School-based SFQ Norms

Cumulative Faculty/School-based SFQ Norms

What to Do Next for Improving Teaching?

EDC Contact Persons


Structure and Items of the SFQ

The SFQ consists of two sections, namely, Section I About the Subject and Section II About the Staff Member. The structure and items are outlined as follows:

Section I About the Subject

  • 4 standard items on students’ learning experience of the subject
  • 1 standard item on subject workload
  • 1 standard item on the average number of hours per week spent on studying the subject (for DSR subjects only)
  • A set of standard items on the achievement of learning objectives/intended learning outcomes (for GUR subjects only)
  • 2 standard open-ended questions to solicit students’ comments on their learning experience of the subject and how it can be improved
  • A maximum of 5 additional questions set by the Subject Leader (optional)

Section II About the Staff Member

  • 2 standard items on the overall view about the teaching of the staff member
  • A set of Faculty-based items endorsed by the respective Faculty Board 
  • 1 standard item on the use of the medium of instruction
  • 2 standard open-ended questions to solicit students’ comments on the teaching of the staff member
  • A maximum of 5 additional questions set by the individual staff member (optional)

In July 2014, the Academic Council (AC) and Learning and Teaching Committee (LTC) approved the change from the paper-based SFQ to an online SFQ (eSFQ), together with its associated policy and guidelines (see Operational Guidelines for Implementing the eSFQ at PolyU). Effective from 2014/15, all SFQs for Discipline Specific Requirements (DSR) subjects and General University Requirements (GUR) subjects will be conducted online via the eSFQ system, replacing the in-class, paper-based SFQ.

 
 


Understanding the SFQ Report

The Educational Development Centre (EDC) is responsible for coordinating the implementation of the eSFQ, analysing the numerical data from the SFQ, and generating the SFQ report, which shows a summary of the statistical results for each staff member.

Essentially, the SFQ report summarises the statistical results of the responses provided by the specific class of students to the various SFQ items on their learning experience of the subject and the teaching of the staff member. It has four main parts:

Information about the staff, subject, and administration

The first part of the report shows the background information about the staff member, the subject, and the administration details, including

  • the name and staff number of the staff member concerned, his/her employment status and departmental affiliation,
  • the programme code, subject name and code, and group number if applicable,
  • other pertinent background information about the subject/teaching, including:

- the programme and subject levels,
- the mode and nature of the subject,
- the language of instruction,
- the semester when the SFQ was administered,
- the number of students enrolled, the number of completed SFQ forms returned, and the response rate, and
- the focus of feedback and the part of teaching being evaluated.

It should be noted that SFQ results from classes with very low response rates (e.g. less than 30%) or a small number of responses (e.g. n ≤ 5) should be interpreted and used with great caution, especially in making judgments about the teaching performance of a staff member, as those results might be quite unreliable.
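A minimal sketch of this cautionary rule in Python (the function name and the 30% and n ≤ 5 thresholds below simply restate the figures quoted above; this is an illustration, not part of the eSFQ system):

```python
def low_reliability(enrolled: int, returned: int) -> bool:
    """Flag a class-set whose SFQ results warrant extra caution:
    a response rate below 30%, or five or fewer returned forms."""
    response_rate = returned / enrolled
    return response_rate < 0.30 or returned <= 5

# A class of 40 students returning 10 completed forms (25% response rate)
print(low_reliability(enrolled=40, returned=10))  # True: interpret with caution
```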

About the subject

The second part of the report shows:

  1. the summary statistics for the four standard closed-response type items on students’ learning experience of the subject, the set of standard items on the achievement of learning objectives/intended learning outcomes (for GUR subjects only), and the additional closed-response items set by the Subject Leader (if any), and
  2. the percentage distributions of students’ responses to the fifth item on subject workload and the sixth item on the average number of hours per week spent on studying the subject (for DSR subjects only).

For the four standard items on subject learning experience, the set of standard items on the achievement of learning objectives/intended learning outcomes (for GUR subjects only) and the additional closed-response items, the mean and standard deviation of the students’ ratings are reported. Moreover, the percentage distribution of the students’ responses in the “strongly agree”, “agree” and “no strong view” categories is plotted graphically on the right-hand side of the item statistics.

For these items, the item scores range from 1 to 5, where a rating of 1 means strongly disagree, a rating of 3 means no strong view, and a rating of 5 means strongly agree. A low mean score (i.e. a score closer to 1) on any item implies that students generally disagree with the statement of the item, while a high mean score (i.e. a score closer to 5) implies that students generally agree with it.

The standard deviation shows the degree of variability of the students' responses to that particular item. A high standard deviation means that the students vary widely in their opinions, and a low standard deviation means that there is a high level of agreement among students in their views on the particular item.

The diagram on the right of the item score gives a graphical representation of the percentage distribution of the students' ratings in the “strongly agree”, “agree” and “no strong view” categories. A concentration of students' responses in the shaded portion of the bar (i.e. response categories of “agree” and “strongly agree”) signifies that most students tend to agree with that item. On the other hand, the “gap” between the bar and the vertical axis on the right represents the proportion of students choosing “disagree” or “strongly disagree”. The larger the gap, the higher the proportion of students disagreeing with the item.
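As a worked illustration of these statistics (the ratings below are invented and do not come from any actual class-set), the mean, standard deviation and bar-chart percentages for a closed-response item can be reproduced as follows:

```python
import statistics

# Hypothetical ratings on one SFQ item
# (1 = strongly disagree, 3 = no strong view, 5 = strongly agree)
ratings = [5, 4, 4, 3, 5, 4, 2, 4, 3, 5]

mean = statistics.mean(ratings)   # closer to 5: students generally agree
sd = statistics.stdev(ratings)    # larger SD: opinions vary more widely

# Shaded portion of the bar: "no strong view", "agree", "strongly agree";
# the remaining "gap" is the proportion choosing 1 or 2
shaded = sum(r >= 3 for r in ratings) / len(ratings)

print(f"mean = {mean:.2f}, sd = {sd:.2f}, "
      f"shaded = {shaded:.0%}, gap = {1 - shaded:.0%}")
# mean = 3.90, sd = 0.99, shaded = 90%, gap = 10%
```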

For the standard items on subject workload and the average number of hours per week spent on studying the subject, it is not appropriate to compute the mean score and standard deviation. Thus, only a graphical representation of the percentage distribution of students’ responses in each of the response categories (i.e. too heavy, appropriate or too light) is shown for these two items.

About the staff member

The third part of the report shows:

  1. the summary statistics for the Faculty/School-based items on the teaching of the staff member, the two university-wide standard items on the overall view of the staff member’s teaching (and their grand mean), and the additional closed-response items set by the staff member (if any), and
  2. the percentage distribution of students’ responses to the item on the use of English (or Putonghua for some specific classes) in teaching.

For the Faculty/School-based items on the teaching of the staff member, the two university-wide standard items on students’ overall view and the additional closed-response items (if any), the mean, standard deviation and graphical percentage distribution of the students’ ratings are reported. These statistics are read in exactly the same way as those for the subject items described above: scores range from 1 (strongly disagree) to 5 (strongly agree), the standard deviation reflects how widely students’ opinions vary, and the “gap” between the shaded bar and the vertical axis on the right represents the proportion of students choosing “disagree” or “strongly disagree”.

For the item on the use of English (or Putonghua) in teaching, a table is included showing the percentage distribution of students’ responses in each of the response categories (i.e. all or nearly all of the time, majority of the time, about half or less than half of the time) and the percentage of missing data, as the data are not amenable to the computation of mean scores and standard deviations.

Open-ended questions

The last part of the report shows the students’ responses to the standard and additional open-ended questions in Sections I and II. The comments will be grouped by section and question for the individual staff member’s perusal.

When going through students’ written feedback, it is useful to look for recurrent comments/themes on aspects of the subject or teaching (e.g. heavy subject workload or the need to return graded assignments faster). Avoid reading too much into outlier comments, as they may reflect the concerns of only a few students.

Do not take ad hominem or inappropriate remarks personally, as such comments convey nothing more than individual and momentary disgruntlement; focus instead on how the subject or teaching can be improved in the future.


Guidelines for Interpreting the SFQ Results

It should be remembered that the ratings shown in the report indicate only the general reactions of the students to the subject/teaching. They are not absolute or precise measures of teaching effectiveness, and should never be viewed as such. The results need to be interpreted with great caution, as some variation in the results across teachers can be expected because of measurement errors and/or factors beyond their control.

The following guidelines may be helpful when interpreting SFQ results:

  • Student feedback should be interpreted in context: the teaching context must be considered when feedback from a particular group of students is reviewed. Contextual factors may include class size, level and year of study, nature of the subject (core vs. elective, theoretical vs. applied, ...), nature of the teaching format (lecture vs. tutorial vs. practical or clinical sessions, ...), etc. Such factors are often beyond the control of the staff member but can nonetheless influence the feedback of a particular class.
  • The numbers and figures should not be seen as an absolute measure of the teaching performance of the staff member. Small differences in the student ratings may not have any statistical or practical significance at all.
  • SFQ feedback is only one source of evidence for judging teaching performance. Other forms of feedback such as students' comments, staff-student discussions, External Examiner's Report, peer review results, etc. should also be taken into account.
  • SFQ results are best regarded as a rough indicator of students' experience of learning rather than a precise and objective measure of the teaching performance of a staff member. A low rating on a particular scale or item signals the need for further investigation rather than hasty judgment and action. Very often, improvements require the co-ordinated effort of several staff members, and changing the context may be just as important as changing the behaviour of the individual staff member.
  • As classes taught by individual staff members are different, it is not very useful to crudely compare the ratings of one staff member with those of another without due recognition of their differences in context. For developmental purposes, the most appropriate way of using student ratings is to track a staff member's ratings over time, and to identify aspects which are causing increasing dissatisfaction or concern to students so that improvements can be made at appropriate times.
  • When passing evaluative judgment on the teaching performance of the staff member concerned in relation to important personnel decisions, it is more appropriate to consider student feedback from multiple classes collected over a period of time, and to base the judgment on a wider range of evidence included in a teaching portfolio, as recommended in the PolyU guideline on teaching evaluation.

Using the Faculty/School-based SFQ Norms

What is a 'norm'?

Simply put, a norm is a set of average ratings computed from a large number of cases in a specified reference group. The Faculty/School-based SFQ norms are given in a separate document entitled Cumulative Norms for SFQ Scores by Faculty/School. The norms are derived by averaging a large number of class-sets of student ratings on the SFQ collected over a sustained period of time. Separate norms are also developed for different comparison groups, defined by Faculty/School and class size. The norms are revised regularly to provide updated information about the average ratings of the various reference groups.
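A minimal sketch of how such a norm could be derived (the class-set means below are invented for illustration; the actual norm tables are compiled by EDC from real class-sets):

```python
import statistics

# Hypothetical class-set mean ratings for one SFQ item in a reference group
class_means = [3.8, 4.1, 3.6, 4.3, 3.9, 4.0, 3.7, 4.2, 3.5, 4.4]

norm_mean = statistics.mean(class_means)
norm_sd = statistics.stdev(class_means)

# quantiles(n=10) returns the nine decile cut points; the first and last
# approximate the 10th and 90th percentile scores reported in the tables
deciles = statistics.quantiles(class_means, n=10)
p10, p90 = deciles[0], deciles[-1]

print(f"norm mean = {norm_mean:.2f}, sd = {norm_sd:.2f}, "
      f"10th pct = {p10:.2f}, 90th pct = {p90:.2f}")
```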

Purposes of having norms

Norms are useful in two ways. First, they provide a reference group for relative comparisons; that is, how the ratings of a particular class compare with the ratings of other classes in the reference group. This may help the staff member to identify the relative strengths and weaknesses of his/her teaching, and areas for improvement. Second, they help account for possible biases in ratings caused by factors beyond the control of the staff member. With separate norms for different teaching contexts (for example, class size), the influence of the factors that define the norm groups can be taken into account.

However, it must be stressed that the primary purpose of these normative comparisons is not to pass judgment on staff members but rather to give them a sense of their relative performance for self-reflection and development. As we will see below, there are problems with using normative comparisons, so extra caution should be taken in interpreting the results.

Considerations in normative comparisons

Normative comparisons are problematic because, by the very nature of norms, about half of the staff members in any reference group are below average. This means that even a highly effective staff member may have a relatively low standing if other staff members in the reference group are rated highly by the students. In other words, a lower-than-average rating does not necessarily mean that the staff member is ineffective or incompetent.

Furthermore, changes or improvements in ratings over time are difficult to interpret because there will be a corresponding change in the norms at the same time.

The appropriateness of the reference group used for comparison is another problem. Students' ratings are often affected by factors such as class size, level of study, and the nature of the subject, which are outside the control of the staff member. It would be futile to compare ratings between staff members teaching in completely different contexts.

The following points should be considered when making normative comparisons:

  • Avoid treating the norms as an absolute standard. Norms should never be viewed as a line of demarcation between pass and fail in teaching performance.
  • It is normal to expect variations in students’ ratings among staff members. Unless the ratings are significantly higher or lower than the averages (for example, when the score is more than 1 standard deviation above or below the norm, or lies below the 10th or above the 90th percentile scores of the norm), they may not have any statistical or practical significance.
  • Avoid using normative comparisons as the sole factor for judgment. Other sources of information such as students’ open-ended comments, test results, staff-student discussions, peer review results, etc. should also be taken into account.
  • It is more useful to use normative comparisons to reveal the relative strengths and weaknesses of the individual staff members, and to identify possible areas for improvement.
  • Choose an appropriate norm for comparison. As far as possible, compare the ratings of a specific class-set with the norm or reference group with matching class size, level of study, or nature of subject.

Understanding the tables of norms

Each table in the document entitled Cumulative Norms for SFQ Scores by Faculty/School shows the means, standard deviations, and the minimum and maximum scores for each of the standardised closed-response type items on the Faculty/School-based SFQ for the class-sets included as the reference group in its computation. It also shows the 10th, 25th, 50th, 75th and 90th percentile scores for each of the items, and the total number of class-sets of data included in the computation.

To establish the relative standing of a staff member's ratings, select the most appropriate table of norms, compare each of the staff member's item scores with the respective mean of that item in the table, and examine the extent of deviation (positive or negative) from the mean. Ratings significantly higher than the mean in the table signal relative strengths of the staff member's teaching, while ratings significantly lower suggest relative weaknesses.

Another way to identify the relative strengths and weaknesses of a staff member's teaching is to compare his/her ratings on the SFQ items with the corresponding percentile scores in the table. A rating below the 10th percentile score of the norm implies that the staff member's rating is among the bottom 10% of the classes in the reference group on that item. On the other hand, a rating above the 90th percentile score means that he/she ranks among the top 10% of the classes in the reference group on that item. Through these comparisons, the relative standing of the staff member's teaching on each of the SFQ items, as compared to the reference group, can be established.
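The comparison logic described above can be sketched as follows (the norm entry and example ratings are invented for illustration and are not taken from any actual Faculty/School table):

```python
# Hypothetical norm entry for one SFQ item (illustrative values only)
norm = {"mean": 3.95, "sd": 0.35, "p10": 3.50, "p90": 4.40}

def relative_standing(rating: float) -> str:
    """Classify a class-set rating against the reference-group norm."""
    if rating > norm["p90"]:
        return "among the top 10% of classes on this item"
    if rating < norm["p10"]:
        return "among the bottom 10% of classes on this item"
    if abs(rating - norm["mean"]) <= norm["sd"]:
        return "close to the norm: likely no practical significance"
    return "above the norm" if rating > norm["mean"] else "below the norm"

print(relative_standing(4.50))  # among the top 10% of classes on this item
print(relative_standing(3.80))  # close to the norm: likely no practical significance
```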


What to Do Next for Improving Teaching?

Collecting student feedback is only the first step in improving teaching. The feedback results will not automatically lead to improvement unless the staff members concerned follow up with plans for development or improvement. This may involve an investigation into students’ problems and concerns, systematic reflection on one’s teaching, discussions and sharing of experiences and insights with other colleagues, and the development of an action plan for improvement. Very often, improvements require the co-ordinated effort of several staff members, as well as changes in the curriculum and learning systems in addition to the behaviour of individual staff members. Support and encouragement from the department are critical in such attempts.

How to make use of the SFQ results to improve teaching?

Teachers may wish to consider the following steps in planning teaching improvements.

1. Interpreting the SFQ scores
Study the SFQ report and compare the ratings with the appropriate norms to identify:

  • the relative strengths and weaknesses of the teaching, and
  • the aspects of teaching which are causing most concern to your students.

2. Finding out more about the students' views

Examine the students’ responses to the open-ended questions in order to understand more about:

  • the aspects of the teaching that the students found most helpful to their learning,
  • what changes in the teaching the students think may help them learn better, and
  • other comments or suggestions made by the students.

3. Developing an action plan for improvement

While the teaching per se may affect students' learning and generate some of their comments, other concerns may stem from the design of the course or subject, or from other factors beyond the classroom teaching. Some of these factors may be outside the teacher’s control. Discussions with colleagues and students may help to clarify some of these factors and suggest plans for development. Teachers may also find it helpful to discuss their plan with colleagues in the department, or with any EDC staff member.

4. Implementing the plan

It is important to monitor the effects of any changes made by collecting students’ (and colleagues’) feedback. As the changes made may involve or affect other members of the department, it is important to talk with them about the plans, and their outcomes.

EDC will also be happy to assist in any way that is useful.

How the EDC can help

EDC is responsible for supporting and assisting PolyU staff in improving their teaching. Colleagues may request the following EDC services as needed:

  • help in the interpretation of the SFQ results,
  • assistance in other forms of collecting student feedback or evaluation,
  • advice and consultation on teaching, curriculum design and learning system design,
  • help in producing learning resources (print-based or multimedia) to support teaching and learning, and
  • advice and consultation on action learning projects on teaching improvements, etc.

In addition, EDC offers a number of short courses, workshops and seminars on university teaching, and maintains a collection of practical books, monographs, and reference materials on university teaching. These may provide useful ideas for teaching. Please approach any educational development officer in EDC for information or assistance.

EDC Contact Persons

Please contact the persons indicated below for assistance in interpreting SFQ results, planning teaching improvements based on the feedback, or devising alternative forms of evaluation.

For: SFQ
Contact: Dr Christine Armatas, Senior Educational Development Officer
Email: christine.armatas@polyu.edu.hk
Ext.: x6298
Room: TU608

For: Programme evaluation; teaching & learning development project evaluation; collecting formative feedback for improving teaching
Contact: Kannass Chan, Assistant Educational Officer
Email: kannass.chan@polyu.edu.hk
Ext.: x6289
Room: TU606

For: Peer review & teaching portfolio
Contact: Barbara Tam, Educational Development Officer
Email: barbara.tam@polyu.edu.hk
Ext.: x5108
Room: TU610

Contact: John Sager, Educational Development Officer
Email: john.sager@polyu.edu.hk
Ext.: x5081
Room: TU613
