Top-flight German business prof faces severe accusations of academic misconduct

One of the most successful German business professors is currently facing awkward questions about his scientific conduct. Ulrich Lichtenthaler, who is affiliated with the University of Mannheim, has come under suspicion of inflating his publication record using unethical methods.

Additionally, a number of his published papers apparently contain severe mathematical errors and methodological inconsistencies.

In the last couple of weeks, two academic journals – “Research Policy” and “Strategic Organization” – officially retracted three papers by Lichtenthaler. “This is only the tip of the iceberg”, a researcher familiar with the investigations asserted, speaking to me on condition of anonymity. “There is much more to come.”

Several people who looked into the matter are convinced that the whole affair has the potential to turn into a major academic scandal. One academic told me that “Industrial and Corporate Change” and the “Strategic Management Journal” are also preparing retractions.

Ulrich Lichtenthaler himself acknowledges failures in some of his work. In a written statement his office sent to me on his behalf, he stated that he himself informed the administration of the University of Mannheim “weeks ago” about “unintentional errors” in his work. “He wants to stress that the mistakes happened unconsciously. Mr. Lichtenthaler himself takes a great interest in elucidating all aspects quickly.” Referring to ongoing investigations by his university, however, he declined to answer any specific questions.

“The boy who gets everything right”

The thirty-three-year-old used to be the undisputed shooting star of German business studies and has an incredible publication record. Since 2004, Lichtenthaler has published more papers in internationally renowned journals than almost any other German business professor. The database used for the Handelsblatt research ranking lists a total of 50 publications, 21 of them published in 2007 and 2008. Among others, Lichtenthaler published in leading outlets like the “Academy of Management Journal”, “Organization Science” and the “Journal of Product Innovation Management”.

Given his amazing productivity, the German association of business professors (Verband der Hochschullehrer für Betriebswirtschaftslehre) awarded him its prestigious prize for young researchers in 2009. In the same year, he topped the Handelsblatt list of the most productive business researchers below 40, even though he was one of the youngest academics on the list. Last year, he was hired as a full professor by the University of Mannheim, which is – according to numerous rankings – home to the leading business department in Germany.

In 2009, we published a portrait of Lichtenthaler’s amazing career in Handelsblatt. With the benefit of hindsight, the headline seems ironic: “The boy who gets everything right”. He told my colleague Anja Müller that despite his striking research output, he didn’t consider himself a workaholic and that “academic enthusiasm” kept him going. He claimed that he worked no more than 40 hours per week and did no academic work on Sundays. “I just started to publish early and was dissed occasionally when the submitted articles did not live up to the expectations of the referees.”

Too productive by half?

About half a year ago, a small group of academics privately started to question this academic flight of fancy. An alert academic refereeing a paper submitted by Lichtenthaler had a hunch that he had seen the same paper by the same author before, but in a different journal. A cross-reference to that work was missing, though.

Publishing similar or closely related papers twice without cross-referencing is considered unethical in academia, partly because it unnecessarily consumes the unpaid work of other researchers who act as referees for the journal. Redundant publications – also called self-plagiarism – clog up the editorial process and eat up precious space in academic journals.

Last year, the Swiss economist Bruno Frey was publicly rebuked by several economics journals for publishing similar papers without cross-referencing them. Eventually, the University of Zurich forced Frey into retirement because of his misconduct.

This sceptical referee teamed up with a small group of academic friends. They started to sift through large parts of Lichtenthaler’s published work. As I’m told, the results were devastating. Allegedly, they discovered rampant self-plagiarism as well as much more severe misconduct.

The self-appointed investigators apparently collected tons of evidence and sent it over to the affected journals.

The accusations in detail

One of the first journals to react was “Research Policy”, an Elsevier journal specialised in “the interaction between innovation, technology or research, on the one hand, and economic, social, political and organizational processes”. “Research Policy” has a history of taking no prisoners when cracking down on scientific misconduct. Its editors were at the forefront of uncovering the serial plagiarist Hans Werner Gottinger (here’s a summary of the Gottinger affair).

This week, “Research Policy” retracted two papers published by Lichtenthaler in 2009 and 2010. The editorial board of the journal also published an extensive and devastating explanation on its website. (The excellent blog Retraction Watch was the first to report on this.)

According to this document, there were three major issues related to the papers:

1) Self-plagiarism

In both cases, “the author failed to disclose … the existence of other closely related papers by the same author”, the document says and accuses him of hoodwinking the journal: “the referees and editors … were misled as to the level of originality of each Research Policy paper … If they had been aware of those parallel papers, they would almost certainly have concluded that each of the two papers in question did not represent a sufficiently substantial and original contribution to knowledge in its own right to merit publication in a leading journal like Research Policy.”

2) Dodgy empirical design

In one case, “Research Policy” accuses Lichtenthaler of having “been inconsistent in his treatment of the variables”. They give a concrete example: “variables treated as important in the 2009 Research Policy paper are disregarded in another parallel paper (…) and vice versa.”

In the other case, the analysis of three sister publications apparently reveals “an omitted variable bias problem that would invalidate the conclusions of the Research Policy 2010 paper”.

The editors come to an explosive conclusion: “In both cases, this raises severe doubts as to the validity and robustness of the conclusions drawn in the two Research Policy papers (and indeed in the other parallel papers).” They explicitly refer to four additional papers by Lichtenthaler that were published in “R&D Management”, the “Journal of Product Innovation Management”, “Strategic Organization” and in “Organization Science”.

One of the people familiar with the details of the cases privately told me that Lichtenthaler’s conduct looks as if he was trying to disguise his concurrent publications.

Faulty statistics

In one of the two papers retracted by “Research Policy”, non-significant results were erroneously labeled as significant. The editors stress that this problem was highlighted by Lichtenthaler himself: “the author wrote to acknowledge a third problem with the Research Policy 2009 paper, namely that the statistical significance of several of the findings had been misreported or exaggerated”.

Lichtenthaler asked to withdraw the paper, but the editors declined this request: “by then the editorial decision to retract that paper on the original two grounds listed above had already been taken”.

Incorrect information regarding the statistical significance of results is also the reason for the third retraction, concerning a paper by Lichtenthaler published in “Strategic Organization”. Embarrassingly, that was a paper Lichtenthaler wrote jointly with his PhD advisor, Holger Ernst of the private WHU – Otto Beisheim School of Management in Vallendar. Russ Coff, the editor of “Strategic Organization”, confirmed that Lichtenthaler approached the journal, took full responsibility for the errors and asked to retract the paper. I asked him why they decided against printing just a correction of the faulty results. “The tables could have been corrected but the core findings were invalid so retracting the article made more sense”, Russ Coff answered.

He also gave me a more detailed description of the statistical issues:

“Most statistical tests examine our confidence that a regression coefficient is significantly different from zero. In its simplest form, such a test is calculated based on the ratio of a coefficient estimate and its standard deviation.

Even a large estimated coefficient would not be significant if the standard deviation is also very large. Generally, the lowest ratio to be statistically significant would be about 1.65 (though there are other factors that determine this). A high level of significance would require a ratio closer to 2.5. Thus, one can look at any table and expect significant coefficients to be around twice the value of their standard deviation (or more).

In the paper, you will find that, in some cases where the ratio of the coefficient to its standard deviation is low (well under 1.6), the coefficient is marked as being statistically significant (typically marked with a * or †).

These are the errors in reporting and they are easily spotted if you know what you are looking for. These errors are especially an issue for hypothesized effects as opposed to control variables.”
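For readers who want to see what such a check looks like in practice, Coff’s rule of thumb can be sketched in a few lines of Python. The function names and the table rows below are my own invention for illustration; they do not come from any of the papers in question:

```python
# A sketch of the back-of-the-envelope check Russ Coff describes: divide each
# reported coefficient by its standard deviation and compare the ratio with
# the usual significance threshold of roughly 1.65.

def t_ratio(coefficient: float, std_dev: float) -> float:
    """Ratio of a coefficient estimate to its standard deviation."""
    return abs(coefficient) / std_dev

def star_is_plausible(coefficient: float, std_dev: float,
                      threshold: float = 1.65) -> bool:
    """A significance star is implausible if the ratio falls below ~1.65."""
    return t_ratio(coefficient, std_dev) >= threshold

# Hypothetical regression-table rows: (coefficient, std. deviation, starred?)
rows = [
    (0.42, 0.20, True),   # ratio 2.10: star plausible
    (0.15, 0.12, True),   # ratio 1.25: star implausible, the kind of error at issue
    (0.08, 0.10, False),  # ratio 0.80: correctly left unstarred
]

for coef, sd, starred in rows:
    if starred and not star_is_plausible(coef, sd):
        print(f"suspect entry: coef={coef}, sd={sd}, ratio={t_ratio(coef, sd):.2f}")
```

Anyone with a published table and a calculator can run this kind of sanity check, which is why Coff notes the errors “are easily spotted if you know what you are looking for”.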

Reactions of the universities, so far

In a comment Russ Coff left on the blog StrategyProfs.net, he also pointed out that similar issues seem to be rampant in Lichtenthaler’s work:

“it appears to be part of a pattern across a number of articles published in a variety of well-respected journals.”

Lichtenthaler’s employer, the University of Mannheim, told me that it learned about the issues from Lichtenthaler on 8 June. Three days later, they received a letter from “Research Policy”.

They decided to start a formal investigation. However, the commission that is going to look at the case has not started its work yet. Additionally, it is not yet clear whether they will ask independent external referees to look at the matter or deal with the issues internally. Last year, when the accusations against Bruno Frey emerged, the University of Zurich immediately started an external audit of Frey’s conduct. (However, the remit of the referees was ridiculously narrow.)

The WHU – Otto Beisheim School of Management in Vallendar, where Lichtenthaler earned his PhD, habilitated and worked until last year, acted more swiftly. The university commissioned three external academics to investigate Lichtenthaler’s scientific conduct. “We will thoroughly illuminate the whole issue”, dean Michael Frenkel told me. Frenkel pledged that there will be “full transparency”. He admitted that he was shocked by the whole affair:

“If the accusations turn out to be true, this is completely unacceptable.”

In 2009, Lichtenthaler told my colleague Anja Müller that up to that point, his academic career had sometimes been rocky but was generally accelerating. Now, his fortunes might have changed.


37 Responses to Top-flight German business prof faces severe accusations of academic misconduct

  1. Pingback: ulrich lichtenthaler « orgtheory.net

  2. It all comes from this obsession with publication records and rankings. Unfortunately, your blog cannot be excluded from this criticism. You have also in the past measured the academic credentials of economists by looking up the number of publications in a database, instead of reading some of them and telling us what you like about them and what you don’t.

    • Andreas, we run two pages a week dedicated to current economics research at Handelsblatt (see here). On average, we run four articles dealing with current papers and research. Some of the stuff is also available in English on this blog. Believe it or not, occasionally this job even requires reading the articles we write about.
      Please also note that the bulk of the contentious articles published by Lichtenthaler and Frey (and the entire work by Gottinger) were published before the Handelsblatt research rankings were introduced.
      We’re currently intensively discussing how to deal with Lichtenthaler in our next business ranking, which will be published in September by the way.

      • B. Göpel

        I completely agree with Andreas Moser: Your counter-arguments are bullshit but unfortunately true: you first establish “stars” by reporting about extraordinary people (see your articles about Sinn, Frey, Lichtenthaler before the scandals) and later on destroy these stars. Very clever; no doubts. Negative news attracts five times more readers; even if the hype were created artificially. The Handelsblatt should return to respectable journalism and it should stop the rise of scientific misbehavior due to its rankings.

      • M.W.

        It makes absolutely no sense to blame the Handelsblatt ranking for misconduct of individual researchers. The HB ranking is the most serious ranking available and it creates much more transparency than we had in the past. I am looking forward to reading the new edition of the BWL ranking in September. Thanks for the great work!

        I have two questions:
        1) Do you already know when the next update of the VWL ranking will be published?
        2) There is only one major complaint about the VWL ranking among many top researchers: In economics, the A+ category should contain the top-5 journals only. At the moment, if we take the Lebenswerk ranking and click on “Punkte A+”, the result is almost meaningless, because there is a strong finance/experimental economics bias in the current A+ category. Of course, when clicking on “Punkte A und A+”, we get a pretty realistic picture about the best German economists, but still the ranking would be significantly improved if category A+ would contain AER, Econometrica, QJE, JPE, and REStud only.

    • “It all comes from this obsession with publication records and rankings.” … yes, maybe! Athletes dope because they want to win (medals/prizes/etc.). Does this imply that you want sport events without competition and winners? I hope not. I think we should not blame the metric that measures performance if someone cheats.

      • Athletics is about quantity (meters run or swum, goals scored, kg lifted), not quality. Science, including economics, should be about quality.

    • “Science, including economics, should be about quality.”

      Dear Andreas,

      Do you think that the research quality does not relate at all with the number of citations a paper (or the work of an academic in general) has?

      • No. I think the number of citations shows how often something has been cited, nothing more, certainly it is not a measure of quality. Take movies: The movie that sells the most tickets is the highest-grossing movie, it is not necessarily the “best” movie (in whose ever opinion). And citations cost even less than a movie ticket. I may want to cite an article as a bad example, to criticize it, to make fun of it or just to drop names or to expand the number of footnotes.

        • There is however a big difference between movies and scientific papers. The target audience of a movie is the general public, while the target audience of a research paper is the community of experts. Experts generally possess the skills to filter information of low credibility/quality. Furthermore, a wrong citation can definitely cost an academic more than a movie ticket. Make a claim based on a garbage paper the community of experts in the field has refuted and you’ll never see your paper being published. Furthermore, nobody would make the effort to criticize a paper that doesn’t have serious enough arguments (and if he does, his work will not be published – hence no citations would appear).

          Exceptions surely exist. But the above refers to what happens *in general*. Hence the strong correlation between research quality and the number of citations (see the number of citations of the work of the physicist A. Einstein, the philosopher K. Popper or the economists R. Selten and C. Granger – their work is undoubtedly of the highest quality).

      • Excuse my typing errors…

  3. Pingback: Handelsblatt.com - Schwere Vorwürfe gegen Mannheimer BWL-Professor Ulrich Lichtenthaler « Handelsblog

  4. Veit Böckers

    Thx for making it public. Once again, science and journalism save the day (be it that one may prove the other wrong on that occasion)

  5. …how many victims of ranking mania in academia will follow?

  6. The big question is why it took so long to find this out. Probably, we have too many (irrelevant) journals which like to be clogged with articles that pretend to be rigorous. But are they relevant to business?

    I doubt that because otherwise readers would have seen the redundancies and duplications earlier. It seems that very few read all the journals, even the “relevant” ones.

    The problem in business and management science is much broader. We have lost relevance. That is also true of economics. Ceteris paribus, all models work, but none helped to see the global financial crisis coming. But they are all so statistically proven. Great, but what is the use of it?

    Hopefully, Prof. Lichtenthaler can keep his position because reality works totally differently. Profs usually don’t make good managers or entrepreneurs, with a few exceptions. Out in the wild, relevance and decisions under high uncertainty count, not statistics.

    And please, do not go into banking because there, we have seen too many models as well that looked great enough to take even more risks but proved fatally wrong when the assumptions turned out to be not part of reality but just statistics. Unfortunately, we as a society have to carry the losses, while the profits were privatized. At least, we have a model that explains this: the Principal-Agent problem.

    This sad case should be a wakeup call for academics.

    • Bruce

      Why on earth do you hope he can keep his position? If he is found guilty, which from what I know looks highly likely, he should be fired. And he should not expect to get any other position of trust in the future. Academia and many other walks of life require people to behave ethically. There is considerable evidence that he has not. Like Icarus, he has flown too close to the sun. The sad thing about this is that an intelligent young man’s career is toast.

  7. Mprulez

    ..and what’s the future of his 10+ PhD students? What about the papers written by them?

  8. If the standard for retracting articles really is “severe doubts about the validity and robustness of [empirical] conclusions,” then SMJ, OS, AMJ, etc., are going to be very busy bees in the months to come. I can imagine them having to retract entire issues full of papers suffering from dodgy empirical design and overstated conclusions.

  9. Anonymous

    @Patrick-Stähler, I sure hope he loses his job as soon as possible and I trust Uni Mannheim to do the only acceptable thing. He not only self-plagiarized, but actually manipulated results. This is absolutely unacceptable and not comparable to making a wrong decision in a company.

  10. I am investigating self-plagiarism as an issue in research ethics, so I’ve been reading your exposes of Frey & co. The more I read, the more I become convinced that too much is being made of self-plagiarism, so much so that the term itself becomes suspect.

    First, plagiarism (without the “self-“) is essentially about appropriating (“kidnapping”) something to which one has no rights, and this is precisely what self-plagiarism (SP) is not. It seems to me the latter term grossly exaggerates the nature of the misconduct (it’s somewhat like assisted suicide opponents calling suicide “self-murder”). At the least, it perpetrates a category mistake.

    Secondly, one should distinguish between the question of failure to disclose previous publication – which is a breach of the contract between the publisher and the published author – and the matter of publishing one’s ideas in similar form in various publications, even various academic publications, without cross-referencing. When academic journals impose no rule about disclosure of previous publication / submission, I find nothing wrong with “SP” in itself (i.e., provided one does not use it to, for example, game the academic assessment system).

    The notion that such disclosure is essential to academic integrity has become a rule of sorts among academic publishers and in the academic community, but the moral basis for it is, in my view, not very strong.

    One frequently invoked reason is that a lot of editing and reviewing effort is saved by the rule in question. Perhaps, but so what? Why is saving the editing and reviewing effort of academics morally binding on authors? Are frequent misspelling, long articles, complex papers, persistent resubmission to dozens of journals, publishing a lot rather than condensing one’s findings into a few excellent pieces, engaging the reviewers with rebuttals, etc. also unethical?

    Then there is the notion that multiple-venue publication (cross-referenced or not) eats up a lot of scarce space that could have been filled with other papers. The question of scarcity is itself quite dubious (in this day and age?!). But even if publication space were scarce, surely the whole point of academic publishing is to have the best articles – and not the largest number of articles – in print (or electronic “print”). Publishing in multiple venues is a good proof that one’s findings are of high quality, having been vetted by multiple sets of editors and reviewers. Or that they are of interest to a broad base of academic readers. (And this, incidentally, suggests that there are benefits to multiple-venue publication that may compensate for the additional editing and reviewing effort…)

    This throws some light on another, distinct issue, that of how an academic’s publication record should be quantified. True, it would be unethical for an academic to report a paper published in 3 venues as 3 distinct papers, simply because that would be gaming the current academic evaluation system. Still, this does not make “SP” unethical, just using it for particular purposes. But perhaps the academic assessment system ought to take into account multiple-venue publication, such that a paper published in 3 journals should, all things being equal, count more than 1 paper published in 1 journal.

    So yes, “SP” may be put to bad use (gaming evaluation), and it may be perpetrated under circumstances that make it unethical (breach of publication contract). But “SP”, in itself, is not, I have come to believe, unethical.

  11. Anthanayan Lohari

    The tragic issue with this story is that Lichtenthaler is an extremely talented researcher. Even if he self-plagiarized two in three of his papers, he still has a very impressive publication record. I hope his career does not find an early end.

  12. Pingback: Journal Cheat Found Out: But the “Journal System” is the Bigger Cheat » Critical Faculties

  13. researcher

    Lichtenthaler has done salami-slicing rather than publishing the same paper twice, and salami publishing is very common. Here are just some comments based on my knowledge of German business research. You will find similar things.

    *** list of names with alleged self-plagiators removed by Olaf – please do not use this blog to anonymously accuse researchers without giving any solid evidence ***

    I have not dug deep into these researchers’ pubs. It is simply some know-how of German business research.
    All of these researchers are way more senior than Lichtenthaler.
    How can you talk about a single (junior) guy if half of the ‘top’ community (of more senior researchers) in Germany has done similar things? If you want to punish Lichtenthaler, you will have to punish many, many ‘top’ researchers at virtually all business universities in Germany.

  14. ResearcherHunter

    @Researcher – this is the most disrespectful comment I have ever read in a forum like this. You are accusing scholars of scientific misconduct without any substance. You have not even read the articles of people like von Krogh – or you simply do not have the necessary intellect to fully understand the articles you refer to. Take for example, the Diamantopoulos article of 2001 and tell me how this can be the same material as in the 2006 paper. Shame on you!

  15. ResearcherHunter

    @Davey – I am not von Krogh, and for sure I did not publish this reply anywhere else.

  16. Pingback: Quality and quantity in publishing | Ingo Rohlfing

  17. Self-plagiarism is bad and deserves punishment. But it is the tip of the iceberg of rot that infests our best journals. For the past 50 years most of our research has followed a fatally flawed soft-social-science model that incorrectly attributes all manner of meaning to statistical significance, does no replications of studies, cannot be cumulated, and obviously is so murky and unfocused that self-plagiarism can easily get by the venerated (incorrectly, also) peer-review process. If this sounds like an over-the-top condemnation of economic and business research, go to my weblog, Management Junk Science (http://sites.udel.edu/mjs), and read the literature for yourselves. All that we bogus “scientists” have created is a system that rewards beancount, and Lichtenthaler gamed that system beautifully. Look at the list of journals he suckered.

  18. Jessica

    Well, it seems that there are serious ethical problems within many top business schools. They just want to see output and don’t care about the way it is produced. Concerning the Lichtenthaler case, it is very interesting to listen to the following video of one WHU professor… http://www.youtube.com/watch?v=qQtE_-ziXuo

  19. FPT

    There is an element missing in the discussion of the article:
    Academic studies build on other academic studies. After this came to light, I had to pull my student (and myself, obviously) off a study/data collection where we used a Lichtenthaler paper as the key source. The paper has not been retracted yet—but we are not taking the risk. In fact, we are not going to trust any Lichtenthaler sources now, given the above. We have to go back to theory/literature review—we wasted a lot of time and effort. A similar case was shared on the AIS mailing list, the author calling it a case of “collateral damage”. A study by several strategy professors was rejected in second-round review at a top journal (there are usually several lengthy rounds of reviews and revisions before a study gets published in an academic journal). The rejection reason was that the study uses Ulrich Lichtenthaler’s “desorptive capacity” concept from one of the meanwhile-retracted papers. Their study can go to the trash now, I guess. Again, months of work wasted. Probably, there are more cases like this out there that will surface over the next months. Lichtenthaler has not only damaged his own career and probably that of his PhD students, but also unrelated academics with this misconduct. Also, there is a massive time investment on the side of WHU and U Mannheim, at each of the above journals, and by the unknown group of people who did the original investigation (btw, thank you for your time). Just to remember, reviewing is usually unpaid work and will keep a lot of brains from actually doing research. This whole case should not have happened. That is why we have these ethical standards in place.

    Another point worth mentioning is that the above journals have acceptance rates of 10% and lower. Having near-identical papers published means that there must have been even more papers submitted to journals that did not make it to publication. We are probably not talking about “a double submission” but a deliberate and repeated case of—as someone above said—”gaming” the double-blind review system by submitting near-identical papers to several journals in parallel. Only Lichtenthaler himself will know the full extent of this.

    The term “self-plagiarism” is a bit of a tricky issue. Did he actually reuse texts or models as such? Or is it more that Lichtenthaler created several models (for the same research problem) and then deliberately excluded variables from each model so as to have “different” papers—reporting findings that were actually inconsistent with, or contradictory to, the full model? I did not read enough to answer that. Both cases are unethical, but in different ways. Text reuse is “self-plagiarism” but does not change the substantive findings. The omitted variables are not “self-plagiarism” yet appear to have led to actually reporting wrong findings (even though the author knew that he was not telling ‘the whole truth’). The latter case is worse, IMHO. Especially as there is an element of deliberate action to it, since Lichtenthaler apparently ‘forgot to mention’ the other submissions/publications in the submission cover letters (and the papers).

    The third point, the statistical errors… This is a different issue and I do not think Ulrich Lichtenthaler is to be held responsible alone. These errors should have been picked up by reviewers. Yet, I am asking myself, is it really the case that the author was not pointed to those errors in any of the many reviews he received? Or is it that he ignored those comments and was instead looking for ‘weak’ reviewers at other journals that did not pick up on the errors in the first place? According to the above article, many of the papers would have reported negative/non-significant results without the errors. Hence, the papers would not even have been considered for publication without the errors. So, the errors might be actual errors—yet they came in handy to have nice positive results for publication. Likely, the errors were made unintentionally, yet the author discovered them at some point but did not want to pull the submitted/published papers at other journals where the errors were not spotted.

    Overall—if the reviews at the journals and the universities confirm the above article—Ulrich Lichtenthaler has to give up (or be made to give up) his academic credentials.

    (No, notifying journals and universities after being made aware of upcoming retractions is too late to count as ‘proactive’. Lichtenthaler told U Mannheim on June 8, after Research Policy sent their letter to U Mannheim on June 6—Lichtenthaler will have known about that letter coming.)

    Btw, I agree that rankings/ratings are questionable indicators of academic quality and create a range of issues (but also some benefits)—this needs a separate discussion.

    • Anonymous

      Talking about journals, does anyone really believe anymore in the worth and true blindness of academic journal publishing? The names are always the same, whether in journals or conferences; everybody knows everybody; authors focus on trying to spot who reviewer no. 1 or 2 is and try to cite his or her work. Editors ultimately make the decision, and they know you even better because they know who the author is. And then, because in management many authors write together, there is a number of “smart” young people who focus on whom to target and write with, and get to publish a lot with little contribution. Unfortunately, that works better for young females. Has anyone noticed how many professors marry their young students, who thereby get access to much? There is something fundamentally wrong with academic publishing concepts and conferences and the way people move upwards and get jobs. Maybe someone should look into that rather than into some isolated effects. It is a lot more rotten than it seems.

    • qed

      FPT — thanks for the good comment.

      However, I do not agree that findings can be flagged as statistically significant by mistake. Statistical software packages take care of the flagging, and they do not make mistakes, especially if the standard errors do not permit it.

      To flag a relationship as statistically significant contrary to the output created by the software, you have to manually add the star (*) or plus (+) signs. Thus, we are talking about intentional fraud here, not careless mistakes.

      With reference to other comments here, I do not agree that the focus on high-ranking journals and citations as measures of impact is somehow problematic. On the contrary: they ensure high-quality research. You have cheats in any business: doping in sports, manipulation of LIBOR in banks, falsification of results in academia. But the cheat got caught. The system works, even if it is not perfect.

  20. Pingback: Bruno Frey May Have Company « 36 Chambers – The Legendary Journeys: Execution to the max!

  21. are_you_for_real

    Of course, it is the fault of the Handelsblatt ranking and the pressure to publish in decent journals (which are only run by a group of US friends) when people cheat!!!

    Competition in academia is counterproductive!! (We) Germans should just do what we did in the past: publish in German journals where we know all the reviewers, copy international theories with a 10-year delay and use weaker methodology.

    Why should we worry about citations as a measure of quality or impact? Research which points in this direction is published in journals which are cited! They have to say that!! Additionally, it’s obvious that international research gets more citations, and under no circumstances should we be participating in international research activities!

    I also want to thank “anonymous”, who points out that we have a serious problem with female researchers who use marriage to get into journals. We have to stop this trend or at least get more homosexual reviewers, so that men have a chance too!!!

  22. Pingback: O criador de dados estatísticos « Portal Tuxauas

  23. science fraud

    It’s unbelievable: all the many cases of Frey publishing the same research several times, with many coauthors and in dozens of journals, are clearly documented at http://freyplag.wikia.com, and still only two journals rebuked him, thereby admitting their own reviewing failure, and only he and two coauthors were put on the RePEc list of plagiarists: http://plagiarism.repec.org/offenders.html. Others continue supervising PhD students at places like Harvard or Columbia. RePEc even refuses to investigate the other cases as long as they are filed anonymously – and no economists are willing to give their name for it: http://www.econjobrumors.com/topic/copy-of-letter-to-repec-plagiarism-committee-and-editors-of-aer-and-jpe What does that say about our discipline?