Economists are often asked to explain or foresee the economic impact of certain events. Beyond theoretical and practical knowledge, this requires clear communication of views. However, post-graduate training in Economics mostly focuses on technical modules. Furthermore, students often overestimate their (writing) abilities, as described by the Dunning-Kruger effect.
This article aims to establish if, and to what extent, perceptions of writing quality differ between students, subject-specific lecturers and writing consultants.
Honours students at a South African university wrote an argumentative essay on a specific macroeconomic policy intervention.
In this study, qualitative data (responses to an evaluation rubric) were quantified for an in-depth analysis of the phenomenon, which allowed for a mixed-methods research design. The essays were evaluated by fellow students, the Economics lecturer, Academic Literacy lecturers and Writing Centre consultants, and their evaluations were then compared. The evaluation form contained 83 statements relating to various aspects of writing quality.
Student evaluators in the peer review were much more positive than the other evaluators, a potential confirmation of the Dunning-Kruger effect. However, despite the more generous evaluations, students were still able to distinguish between varying skill levels, that is, good and bad writing. Discrepancies in evaluations between the subject specialists were also observed.
More conscious effort needs to be put into teaching economics students the importance and value of effective writing, with clear identification of the requirements and qualities of what is considered to be effective writing.
Economists are often asked for opinions about the economic consequences of certain events. Examples include the potential impact of Brexit on South African exports to the United Kingdom (UK) and the potential impact of the US-China trade war on South African trade. To respond to such requests, economists have to analyse the situation, using subject knowledge and existing models, but they are then also required to clearly communicate their message, either in written format or through oral communication (interviews).
Curricula for post-graduate studies in Economics mostly comprise technical modules and other applied fields of economic theory, and although economics is, quite logically, associated more with numbers than with writing ability, some scholars of economics make a strong case for honing writing skills. To be an economist is to be a writer, or in the words of McCloskey (
Economics depends much more on the mastery of speaking and writing than on the mastery of engineering mathematics and biological statistics usually touted as the master skill of the trade. (p. 188)
McCloskey (
McCloskey is not alone in this opinion; accordingly, the graduate attributes subscribed to by the Faculty of Economic and Management Sciences at the North-West University include the following, very specific references to written communication:
In addition to the general outcomes of the B.Com Hons degree, the content of this qualification is structured in such a manner that specific exit levels (including the critical outcomes) will enable students to demonstrate the ability to
Specific module outcomes for honours in Economics also refer to the ability to analyse and disseminate information and present it in an ‘honours dissertation’ (written research report) and an oral presentation (NWU
Despite these explicit requirements regarding writing contained in the outcomes, lecturers often find that students, when required to write longer texts in their final and honours years, display a half-hearted outlook regarding the necessity of effective writing. Many fail to see the relevance of effective written communication and display an apathetic attitude towards acquiring or honing these skills. This may in part be due to large class sizes at undergraduate level, which necessarily limit written assignments as assessment options. As a result, extended writing is seldom done at undergraduate level, which sends the unintended message to students that writing is not a required skill. In addition, writing may be considered the educational outcome of the so-called soft sciences (humanities), and therefore not deemed necessary for the harder, numerical sciences.
Dunning (
The current study was therefore prompted in part by the perceptions of lecturers who experienced that students in economics held erroneous beliefs regarding their writing abilities; the lecturers wished to dispel these mythical self-evaluations. In addition, at face value, students with better overall performance in the subject seemed to be able to communicate better in writing about the subject content. This is a classic manifestation of the so-called Dunning-Kruger effect (Kruger & Dunning
The activity reported on in this article can also be seen as an exercise in awareness raising (or ‘consciousness raising’ as in James
For this exercise a written assignment was evaluated on four levels: peer to peer, subject lecturer to student, Writing Centre consultant to student, and Academic Literacy lecturer to student (the full method is explained under methodology). Evaluations were done via a Google Forms interface, which yielded quantifiable data. The data were analysed and discussed with students to identify the most salient discrepancies in evaluation, and these discrepancies are also identified and discussed in this section. For the purposes of this article, an in-depth analysis of six of these texts is also included.
Although there are certain areas of agreement in evaluation, some specific areas find great discrepancies in evaluation among students, subject lecturers, Academic Literacy lecturers and writing consultants. These areas can be associated with the finesse of writing.
In the rest of this article an overview of the Dunning-Kruger effect is first presented, then what was defined as quality writing in this specific context is discussed, whereafter the full methodology is explained, followed by a discussion of the findings and their implications for the teaching of writing to students of economics.
The Dunning-Kruger effect explains that a person’s perception of his/her ability is rarely accurate (Kruger & Dunning
Peer evaluations, by contrast, tend to be much more accurate with regard to aspects such as leadership qualities, but when it comes to demonstrating subject knowledge, an incompetent peer suffers from the same ‘unknown unknowns’ (Dunning
Grammar and argumentation formed the basis of a few of the experiments central to the Dunning-Kruger theory, and in Dunning (
Performance estimates are also partly driven by perceived task difficulty. If something is perceived to be easier, a person tends to overestimate their ability and if something is perceived to be hard, they underestimate their ability (Dunning
Probably the best-known finding from the research done on the Dunning-Kruger effect can be summarised in the quote that poor performers are:
… doubly cursed: their lack of skill deprives them not only of the ability to produce correct responses, but also of the expertise necessary to surmise that they are not producing them. (Dunning et al.
This leads to incompetent students failing to estimate the level of their incompetence (those performing in the lower 25% extensively overestimate their performance [Dunning
Although Dunning (
Studies on the performance of South African Economics students are limited to the overall performance in specific modules like first year (Edwards
Only two studies were found investigating the Dunning-Kruger effect in a South African context. In Dodd and Snelgar (
While ‘quality writing’ is the topic of numerous books on its own (see McCloskey
The definition of a quality text used in the context of this article was several years in the making for the learners. All learners were required to take a compulsory course in Academic Literacy in their first year of study (about three years prior to the experiment for most learners in this study). In the Academic Literacy module, students were introduced to certain universally accepted qualities of effective academic writing (Adrianatos et al.
The marking scheme (evaluation rubric) for the written assessment was divided into five sections. The complete evaluation rubric is available as an addendum, but an overview is provided below. Note that it falls outside the scope of this article to provide a full academic justification for the categories, and also note that the concept of ‘quality’ with regard to written texts admittedly contains many more categories than those selected for this exercise. For obvious practical reasons, some limits had to be imposed on the number of qualities attended to.
Five categories of text quality were used in the marking rubric based on expert opinion for the inclusion of standard writing elements and the specific goals for the written assignment: title, language, paragraphing, macrostructure, and academic sources, each of which is elaborated on very briefly below.
The title of the text should be fit for purpose, clear and descriptive. Titles like ‘assignment 1’ or ‘essay’ are of insufficient quality. The ability to write a clear, concise title illustrates the ability to state the main idea of a text succinctly.
The category of Language refers to objective measures (correctness), as well as subjective measures of quality or standard. Punctuation, spelling and grammar are in most cases objectively measured – a matter of right or wrong. Sentence length and structure are more subjective categories, referring to the readers’ expectations of and experience with the sentences. Overly long and complex sentences are usually more difficult to read, while an overuse of very short sentences stunts reading fluency and may create the impression that the writer is unable to ‘string together’ larger sections of meaning, which may imply a lack of subject knowledge or of linguistic control.
Paragraphing is a feature of writing that seldom falls into place automatically, but takes conscious editing effort. Well-structured paragraphs should focus on one main idea per paragraph and contain a clear topic sentence. Support for the main idea (examples, elaboration, explanations) should be present in the paragraph, and these support statements should be clearly linked to the main idea and to each other. For this reason, the use of so-called linking devices is vital (linking devices are known by a number of synonyms, such as cohesive devices, discourse markers or transition markers).
The Macrostructure category refers to the organisation of information in the text. For this, each section has certain characteristics which usually occur in academic texts. For example, the introduction and conclusion each has specific qualities which need to be present. These qualities could be measured objectively (are they there?), as well as subjectively (are they good?). The introduction should at the very least contain a background which leads to a thesis. It should also contain an overview of the discussion to follow, as well as an overview of findings. The conclusion should in turn present a summative review of the text with a final verdict on the question investigated. The conclusion should not present new information.
The most subject-specific part of the text (still part of the macrostructure), is the text body in which the main argument is presented. For this, students were required to provide additional background, as well as an explanation of the concept under discussion.
The final evaluation category is the use of academic sources, once again subject to both objective evaluation (the correctness of referencing and bibliographical formats) and subjective evaluation, which refers to how well academic sources were used to support statements in the text and the argument in general.
The evaluation scheme was set up to allow the reader to ‘ease into’ the text and gradually build up to the more cognitively demanding aspects of textual evaluation. For example, while it is relatively easy to spot a typing error or identify a sentence with unclear meaning, it takes far more effort and subject knowledge to identify errors in the argument. In the terminology of the Dunning-Kruger effect, evaluation started at supposedly ‘known-knowns’ (spelling) and moved to ‘known-unknowns’ (where a learner can see that, for example, a sentence is bothersome, but is unable to identify how to improve it) and finally to ‘unknown-unknowns’ (a complete inability to identify an issue, which becomes apparent when a student evaluates an argument as effective while it is in fact incomplete). Awareness raising occurs when known-unknowns become clearer and unknown-unknowns are taught. This systematic approach to evaluation also mirrors, to an extent, the study by Williams, Dunning and Kruger (
A mixed method exploratory sequential design (qualitative followed by quantitative) was used to investigate the observed trend in-depth and to measure its occurrence (Creswell & Clark
Students wrote a long essay of 2000 words as part of their preparation for entering the annual National Budget Speech competition. The essay also formed part of their assessment plan for the semester. The requirements for the essays are outlined on the competition website, but can be summarised as follows: students should argue for or against a specific economic policy intervention – in this particular case, quantitative easing (QE).
The completed essays were then evaluated by four different stakeholders using an evaluation form in question format on Google Forms. The content of the evaluation form is discussed above. The form was set up in multiple-choice format, meaning that it contained a number of statements to which evaluators could respond by picking only one option (although additional comments could be added as general comments at the end of a section). This format was chosen because clearly expressed answer statements reduce marker uncertainty. Additionally, a feedback letter containing the selected answers was automatically generated, giving the students immediate guidance on the issues in their manuscripts. In total, 83 statements had to be evaluated.
Extract from digital evaluation form.
This technique allows for focused peer review and has the benefit of producing an easily quantifiable data set. For pedagogical purposes it was also connected to a form-fill letter, which provided the student with a feedback letter of approximately three pages containing the chosen comments. Students would therefore have been able to use this feedback to revise their work towards their second draft, but also to use the feedback form as general guidelines for future writing. The form was created by staff at the North-West University Writing Centre.
All submitted essays were first evaluated by Writing Centre consultants, which yielded 66 responses on the Google Form datasheet. The next evaluation was a peer review exercise using the same form. For the peer review, all students were assigned to evaluate the essays written by two peers and they received access to these files via electronic distribution. This was done with the intention of raising their awareness of the writing standard as well as the subject content in the essays of the peers. This part of the exercise provided 103 responses. Students were not randomly assigned to each other’s work, but were matched according to subject field, with students from the different fields (economics, international trade, and risk management) evaluating students taking similar subject combinations.
The next evaluator was a subject specialist – the lecturer for the subject. The lecturer was assigned to evaluate a selection of six essays, which were randomly selected from three bands of results – high, average and low academic achievement scores.
The final evaluators were lecturers from the subject group Academic Literacy, who also evaluated the selection of six essays. In this instance the Academic Literacy lecturers and the Writing Centre consultants acted as the writing specialists, or non-subject-specialist evaluators, with the aim of focusing on writing structure and style. Their evaluations therefore serve as the external standard.
Students were not required to do a self-evaluation on the Google Form, which would have provided interesting additional data. However, the expectation was that students would have utilised the form as a marking rubric before submission to evaluate their own work and, based on their self-evaluation, submitted an essay they deemed to be of good quality.
All participating students signed the standard consent form of the Writing Centre which allows the university to use submissions at the Writing Centre in anonymised research. The consent form also informed students on confidentiality and withdrawal procedures.
Two sets of analyses were done on the data.
First, the data were considered holistically. In this instance, data for the evaluation of all 66 student essays are discussed. The data obtained for this set include the peer review and writing consultant responses to the form questions. The second analysis is an in-depth analysis of just six student essays. These six essays were also evaluated with the same form, by the subject lecturer as well as a writing specialist.
All students were bona fide students who had moved directly from undergraduate studies to their honours degree. For the majority, English is their second language.
Comparing the evaluation between writing consultants and students.
Category | Subcategory | Statement evaluated and number | Consultants (%) | Students (%) | Difference
---|---|---|---|---|---
Configuration | Title | The title is descriptive (A1) | 30.3 | 52.4 | 22.1
Language | Punctuation | You had correct punctuation and spaces (A4) | 21.2 | 62.1 | 40.9 |
Language | Sentence structure | In general your sentence structure was good (A12) | 53.0 | 82.5 | 29.5 |
Language | Word choice | In general your word choice was formal (A14) | 13.6 | 37.9 | 24.3 |
Language | Spelling | Your essay had no spelling mistakes (A17) | 27.3 | 52.4 | 25.1 |
Language | Grammar | Your essay had good grammar (A20) | 33.3 | 83.5 | 50.2 |
Paragraphing | One idea | Your paragraphs had single clear main ideas (A22) | 60.9 | 84.3 | 23.4 |
Paragraphing | Clear topic sentence | Most of your paragraphs had a clear topic sentence (A24) | 48.5 | 70.9 | 22.4 |
Paragraphing | Logical linking of words | You used linking words or phrases effectively to guide the reader to follow your train of thought (A26) | 30.3 | 67.0 | 36.7 |
Paragraphing | Logical linking of ideas | The main ideas linked logically and obviously (A28) | 45.5 | 87.4 | 41.9 |
Macrostructure | | | | |
Introduction | Thesis statement | Your thesis statement is clear, indicating the position that you will be taking (A32) | 19.7 | 66.0 | 46.3 |
Introduction | Overview | It is clear what main parts your discussion will follow (A34) | 48.5 | 83.5 | 35.0 |
Introduction | Preview of conclusion | It is clear what your conclusion to the problem is (A36) | 24.2 | 66.0 | 41.8 |
Specific position | Reason for specific policy | The reasons why you took the position is explained (A52) | 31.8 | 58.3 | 26.5 |
Specific position | Superiority of your position explained | The superiority of your position is explained showing why the alternatives should not be used (A56) | 16.7 | 51.5 | 34.8 |
Conclusion | Summary | The conclusion in your essay had a clear recapping or summary of the main parts of the argument (A60) | 33.3 | 72.8 | 39.5 |
Resources | Sources | Your use of sources was good (A66) | 45.5 | 73.8 | 28.3 |
Resources | Referencing style | You used one style consistently and correctly (A69) | 60.6 | 84.5 | 23.9 |
Resources | In-text referencing correctness | In your text, you used the referencing style correctly (A72) | 21.5 | 70.7 | 49.2 |
Resources | Referencing of facts | In your use of facts you have a good mix of sources (A79) | 43.9 | 68.9 | 25.0 |
Resources | Bibliography – matching sources | Your bibliography corresponds correctly with the in-text references (A81) | 27.3 | 80.6 | 53.3 |
Resources | Bibliography correctness | Your bibliography was correct (A83) | 1.5 | 59.2 | 57.7 |
For all of the statements, without exception, the students’ peer review was much more positive than the evaluation of the consultants. This is an overwhelming indication of students’ overestimation of their peers’ performance.
A graphical representation (a and b) of the evaluation between writing consultants and students (
A closer look at Panel B indicates that some of the largest differences are observed in the later statements (81–83) relating to the handling of resources. Students have a much more skewed perception of their abilities and accuracy in handling sources. Statements dealing with paragraphing received scores closest to each other (discussed later in the article).
Comparing average scores between writing consultants and students.
Category | Consultants: Score | Consultants: Rank 5 | Consultants: Rank All | Students: Score | Students: Rank 5 | Students: Rank All | Difference: Score | Difference: Rank 5 | Difference: Rank All
---|---|---|---|---|---|---|---|---|---
Configuration | 30.30 | 3 | 5 | 52.40 | 5 | 7 | 22.10 | 5 | 7 |
Language | 29.68 | 4 | 6 | 63.68 | 4 | 5 | 34.00 | 3 | 4 |
Paragraphing | 46.30 | 1 | 1 | 77.40 | 1 | 1 | 31.10 | 4 | 5 |
Macrostructure | 29.03 | 5 | - | 66.35 | 3 | - | 37.32 | 2 | - |
Introduction | 30.80 | - | 4 | 71.83 | - | 4 | 41.03 | - | 1 |
Specific position | 24.25 | - | 7 | 54.90 | - | 6 | 30.65 | - | 6 |
Conclusion | 33.30 | - | 3 | 72.80 | - | 3 | 39.50 | - | 3 |
Resources | 33.38 | 2 | 2 | 72.95 | 2 | 2 | 39.57 | 1 | 2 |
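The category averages in the table above follow directly from the per-statement agreement percentages reported earlier. The sketch below (a Python illustration, not the authors’ actual analysis code; the grouping of statements into categories is assumed from the rubric description) reproduces the consultant and student averages, the differences, and the resulting rank order:

```python
from statistics import mean

# Per-statement agreement percentages (consultant, student) from the
# comparison table, grouped by rubric category (grouping assumed).
statements = {
    "Configuration": [(30.3, 52.4)],
    "Language": [(21.2, 62.1), (53.0, 82.5), (13.6, 37.9),
                 (27.3, 52.4), (33.3, 83.5)],
    "Paragraphing": [(60.9, 84.3), (48.5, 70.9), (30.3, 67.0), (45.5, 87.4)],
    "Introduction": [(19.7, 66.0), (48.5, 83.5), (24.2, 66.0)],
    "Specific position": [(31.8, 58.3), (16.7, 51.5)],
    "Conclusion": [(33.3, 72.8)],
    "Resources": [(45.5, 73.8), (60.6, 84.5), (21.5, 70.7),
                  (43.9, 68.9), (27.3, 80.6), (1.5, 59.2)],
}

# Average the statement scores per category and take the student-consultant
# difference, rounded to two decimals as in the table.
averages = {}
for category, scores in statements.items():
    consultant = mean(s[0] for s in scores)
    student = mean(s[1] for s in scores)
    averages[category] = (round(consultant, 2), round(student, 2),
                          round(student - consultant, 2))

# Rank categories from highest to lowest average agreement for each group.
rank_c = sorted(averages, key=lambda c: averages[c][0], reverse=True)
rank_s = sorted(averages, key=lambda c: averages[c][1], reverse=True)

for category, (c, s, d) in averages.items():
    print(f"{category}: consultants {c}, students {s}, difference {d}")
```

Running this recovers, for example, 29.68 vs 63.68 for Language and 46.30 vs 77.40 for Paragraphing, with Paragraphing ranked first by both groups, matching the table.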
Apart from the evident overoptimistic view of their peers’ writing ability,
Data set B was also analysed in two ways. The first and simplest analysis is to compare the accuracy of the students’ peer evaluation with the responses of all evaluators.
Six student documents were assessed by four different evaluators. These evaluators comprised fellow students (peer evaluation) (S), the subject specialist (the Economics lecturer) (SS), a writing specialist (first-year academic writing lecturer) (WS), and a Writing Centre consultant (C). It must be noted, however, that the fellow student evaluator was not the same person for all six documents. This is also the case with the Writing Centre consultant. The documents were firstly assessed based on the number of times all evaluators agreed on the given criteria for each topic (as shown in
The count of cases where there is an agreement on each topic in all six student documents among the evaluators.
The second analysis is an attempt at finding agreement or disagreement in evaluations, based on the understanding of the topics, between two ‘camps’: the student and subject expert in one camp, and the Writing Centre consultant and Academic Literacy lecturer in the other. The student’s own evaluation is compared to the evaluations of the three other evaluators. This kind of analysis indicates where the student and subject specialist understand the definitions or concepts of writing elements differently from the writing experts. It illuminates areas of shortcoming where both the subject specialist and the student act in the area of unknown-unknowns and should subsequently be assisted by agents trained in writing in order to attain exit-level graduate attributes, but it could also indicate areas where the writing experts may need to seek additional clarity from subject experts in an activity like this.
While the above discussion is drawn from a comparison between the evaluation of the students and Writing Centre consultants, the last part of the analysis compares the evaluations of the students, academics of the Writing Centre and the Economics lecturer. As discussed earlier, six specific essays were chosen and the comparison includes all the statements and aspects covered in the evaluation form and not only the 22 mentioned in the analysis method section.
Comparing the responses of the Academic Literacy lecturers (as language/writing specialists), the Economics lecturer (as subject specialist) and the students confirms the earlier observation that students judge their fellow students’ work to be of a higher standard than the specialists do.
Out of all the responses (each student essay evaluated on 35 statements by three evaluators), in 6% of the evaluations only one of the evaluators had a positive view of the specific aspect of the essay. In all these cases, the positive view came from the student evaluation. In other words, in 6% of the evaluations both the subject and language specialists felt that the specific standard was not met, while the student felt that it was met. There was no instance in which one of the specialists was the only one to have a positive view.
Delving deeper into the instances in which the students provided the only positive response – ignoring the one statement on the configuration (quality of the title) – yielded more interesting observations. For the essays written by low performers, the only positive response came from the peers in 4.4% of the questions. This increases to 8.1% for the medium performers and 6.3% for the best performers. The students therefore tend to be more positive in their evaluation of medium and top performers than of low performers; that is, they are able to distinguish between poor and better work. Concentrating on the different categories of text quality, the students were more positive in their evaluation than the academics regarding the use of sources (8.3% of the questions) and paragraphing (8.3%). Language quality and macrostructure came in at lower levels of 6.3% and 4.6%, respectively.
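The ‘only one evaluator positive’ counts described above can be derived mechanically from the form responses. The following sketch uses hypothetical evaluation records (not the study data) to show the counting logic, assuming each response is recorded as a positive/negative judgement per evaluator:

```python
from collections import Counter

# Hypothetical responses: per evaluated statement, whether each of three
# evaluators agreed that the quality standard was met (True = positive view).
evaluations = [
    {"student": True,  "subject_specialist": False, "writing_specialist": False},
    {"student": True,  "subject_specialist": True,  "writing_specialist": True},
    {"student": False, "subject_specialist": False, "writing_specialist": False},
    {"student": True,  "subject_specialist": False, "writing_specialist": False},
]

# Count the cases in which exactly one evaluator held a positive view,
# and record who that lone positive evaluator was.
lone_positive = Counter()
for responses in evaluations:
    positives = [who for who, view in responses.items() if view]
    if len(positives) == 1:
        lone_positive[positives[0]] += 1

# Share of all evaluations in which the student was the only positive voice.
share = lone_positive["student"] / len(evaluations)
print(lone_positive, f"student-only share: {share:.0%}")
```

In the study’s data this share was 6% overall, with the lone positive view always coming from the student, never from a specialist.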
In
In a typical educational situation like this, the assumption could be that the economics lecturer is the subject specialist or content expert, with some experience in writing. The student on the other hand, is a novice in writing, with limited subject knowledge, albeit more subject knowledge than the writing specialist or the writing consultant.
These assumptions are not borne out in the data (
The number of cases in which the student (S) and the subject specialist (SS) agree on each topic across all six student documents (blue), and the number of cases in which all evaluators agree (orange), plotted against the topics evaluated in the eight different categories.
In blocks B and C, it is clear that the student and the subject specialist have the same understanding of issues of grammar, spelling and punctuation, but this understanding is not shared by the writing experts.
The next area is what constitutes good paragraphing. A good paragraph has one main idea, and this idea is presented in coherently linked sentences. While the subject specialist and student had similar evaluations regarding their perceptions of these two paragraph elements, the writing specialists differed in opinion.
In
Types of writing per topic where all evaluators had the most disagreements.
Topic | Type of writing | Who disagreed |
---|---|---|
I: Background | All | None |
I: Preview of conclusion | All | None
I: Thesis statement | All | All
L: Spelling | All | WS+C |
P: Clear topic sentence | All | WS+C |
P: One idea | All | WS+C |
S: Side effects | All | All |
T: Title | All | All |
C: Absence of new ideas | Bad | None |
C: Final answer | Bad | WS+C |
L: Grammar | Bad | WS+C |
L: Punctuation | Bad | WS+C |
L: Sentence length | Bad | WS+C |
P: Logical linking of ideas | Bad | WS+C |
R: In-text referencing correctness | Bad | WS+C |
R: Referencing style | Bad | All |
R: Sources | Bad | WS+C |
S: Reason for specific policy | Bad | WS+C |
B: Potential impact | Average | None |
B: QE in the Eurozone | Average | None |
B: QE’s impact on the US economy | Average | None |
I: Overview | Average | None |
L: Word choice | Average | WS+C |
R: Quotes sources | Average | All |
R: Referencing of facts | Average | WS+C |
S: Contrast with alternatives | Average | WS+C |
S: Specific policy | Average | WS+C |
B: Explaining qualitative easing | Good | None |
B: QE’s impact on the Eurozone economy | Good | WS+C |
C: Summary | Good | None |
L: Sentence structure | Good | WS+C |
P: Logical linking of words | Good | WS+C |
R: Bibliography - matching sources | Good | WS+C |
S: Superiority of your position explained | Good | None |
B: QE in the US | None | None |
QE, quantitative easing.
This is based on the analysis shown in
Only the student was not in agreement with the rest.
It is, therefore, of interest to note which topics group together and why. In the case of the topics listed under ‘bad’ writing, almost all the disagreements are between the two ‘camps’: the writing specialists on one side, and the students and the subject specialist on the other. This trend indicates that the students and the subject specialist share the same understanding of those topics and deem them to be handled correctly in the ‘bad’ writing.
Furthermore, the topics listed for average writing are those in which the evaluators have some knowledge, but not enough to confidently judge correctness. It is interesting to note that, in average to good writing, all evaluators agreed on the topics in which basic-level subject content could easily be identified. The disagreements in good writing arose in the topics where evaluators venture into the ‘unknown-unknown’ field, each determined by their expertise and perceived understanding of the others’ discipline.
Thus, it seems that when elements of language, like grammar, spelling and punctuation, are to be evaluated, external (language expert) assistance will be required by students and subject specialists alike. This is not surprising, since most universities at present require their post-graduate texts to be proofread by professional language editors, and many journals nowadays refuse to accept articles without proof of professional proofreading.
As far as effective communication (the graduate attribute mentioned earlier) is concerned, it is vital to obtain a logical flow and focus in written texts. The data illustrate that assistance from writing pedagogues may be needed to help students in economics refine their writing in order to meet the graduate attributes expected of them, that is, to formulate, present and communicate ideas and arguments effectively.
The overall outcome of the study confirms that students’ peer reviews of their honours essays tend to be much more positive than those of the subject lecturer and writing specialists, in line with the Dunning-Kruger effect. On the positive side, however, despite the more generous evaluations, the students did display the ability to distinguish between varying skill levels, that is, good and bad writing. They ranked the elements of quality writing in the same order as the specialists and were more generous towards good writing than towards bad writing. Based on the evaluations from all the role players, paragraphing (a required characteristic of good writing) received the highest marks. It is thus comforting to know that this specific group of students, although only in the first half of their honours year, are deemed able to build an argument and write coherently – the building blocks of economic rhetoric. On the negative side, they were not deemed adept in the use of sources and, being second-language English writers, certain issues regarding language use were highlighted by all the role players – including the students themselves.
An unsurprising finding was that students were able to evaluate some easier aspects of writing quite accurately, whereas areas of writing requiring greater finesse were evaluated less accurately. The findings become troublesome, as far as pedagogy is concerned, where both the economics student and the lecturer differed in their analyses from those of the evaluators trained in writing and language, illustrating discrepancies in the experience or definition of what constitutes effective writing. A second area of contention is where only the subject lecturer’s evaluation disagreed with that of the other three parties, illustrating either substandard subject knowledge and application on the part of the students, or possibly unclear expectations regarding academic requirements on the side of the lecturer.
The implications for pedagogy are quite simply that more conscious effort needs to be put into teaching economics students the importance and value of effective writing. Furthermore, the skills, requirements and qualities (definitions) of what is considered effective writing need to be spelt out and illustrated in more elaborate detail. Economics lecturers, and probably the syllabus, need to place more emphasis on the correct use of sources. This includes paraphrasing (not merely copying others’ ideas), as well as referencing (both in-text and the compilation of a reference list). This, in combination with clear academic expectations regarding subject knowledge and academic application to content and context, may help raise students’ awareness of what it really takes to become an effective communicator on matters relating to their chosen profession.
An improvement on this study would entail a few more pre-exercise steps, such as a pre-test of students’ theoretical knowledge of concepts and an investigation into how students acted upon the information they received about the mismatches between their self-evaluations and those of others. A self-evaluation on the same form as the peer evaluation would also provide additional quantifiable data.
The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.
J.d.T. was involved in the conceptualisation, data collection, data analysis and writing of the article. A.P. was involved in the conceptualisation, data analysis, writing of the article and critical reading. H.L. was involved in the conceptualisation, writing of the article and critical reading. M.G. was involved in the conceptualisation, project administration and writing of the article.
The data were collected anonymously and all participants provided informed consent.
This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
Data that support the findings of this study are available from the corresponding author, A.P., upon reasonable request.
The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of any affiliated agency of the authors.