I'm just dropping in to respond to your point 2. Publons (now part of Web of Science) allows you to list peer reviews you have completed and, optionally, post your review. Some journals will now automatically report to Publons if you have a profile. It would be great to normalize using something like this in our field.
Original Message:
Sent: 07-26-2024 03:28
From: Xiaocheng Li
Subject: Is our publication model an existential threat to our field?
Thanks for raising the topic and for the insightful comments. As a junior researcher in our field, I also agree with these points.
I just want to share two thoughts that I had while reviewing papers (sorry if this is a bit of a detour from the main theme of the discussion here):
- Can we introduce a confidence score system to indicate the confidence of a submitted review? Generally, as reviewers, we may have different confidence levels for different papers we review, depending on the familiarity with the topic and the time spent on the review.
- Can we make all reviews public in an anonymous manner (which may or may not depend on the willingness of the reviewers)? That's how it works for many CS conferences on platforms such as OpenReview (https://openreview.net/). The critiques in the reviews can provide guidance for readers, and in particular for graduate students and junior researchers in our community, to get a better idea of how to produce good research and write good papers. Going one step further, this type of platform also allows a general audience (those who are neither the authors nor the reviewers of a paper) to comment on it anonymously, which could facilitate discussion of the paper and an exchange of ideas.
This might not be an easy change, even from the technical side of the submission system, but I'd be happy to see these changes in the near future.
------------------------------
Xiaocheng Li
Assistant professor of Analytics and Operations
Imperial College Business School, Imperial College London
https://xiaocheng-li.github.io/
Original Message:
Sent: 07-25-2024 12:17
From: Bruce Hartman
Subject: Is our publication model an existential threat to our field?
I agree with Barry, and with Shane. One pet peeve of mine is the endless literature reviews in articles today. Just because you looked it up is no reason to put it in the paper. Only the articles directly relevant to your subject should be included and summarized. That might be three or four or five, unless the submitted article has very wide scope.
I also agree about endless proofs, some of them trivial, in appendices. Refining your proofs may take longer in writing, but when the reader has to go to the appendix for substantiation, it wrecks the reading flow.
And Shane, thanks for providing the link to Chris Tang's article. Limiting what reviewers say to main points is very helpful. I've read too many 'response to reviewers' letters from authors, detailing every small point authors felt they had to respond to.
It's also helpful if reviewers think hard about the subject and don't make comments that are just plain wrong and must then be rebutted. More openness to new ideas would also be useful, as we run the risk of endlessly refining a few ideas that were originally good rather than publishing genuinely new ones. My advisor Moshe Dror, God rest his soul, frequently railed about 'epsilon' papers that contributed just that much to a topic.
------------------------------
Bruce Hartman
Professor
University of St. Francis
Tucson, AZ United States
bruce@ahartman.net
website: http://drbrucehartman.net/brucewebsite/
blog: http://supplychainandlogistics.org
Original Message:
Sent: 07-24-2024 09:58
From: Barry Nelson
Subject: Is our publication model an existential threat to our field?
At the INFORMS Simulation Society Workshop in June I participated in a panel discussion that included the topic of the future of publication in our field. I took an extreme position and argued that our current publication model is threatening the existence of our field.
I started by noting two highly influential papers that appeared in Operations Research when I was a graduate student: Schmeiser (1982), "Batch Size Effects on the Analysis of Simulation Output," and Schruben (1983), "Confidence Interval Estimation Using Standardized Time Series." The Schmeiser paper was 14 pages long (in the then small trim size), contained neither theorems nor experiment results, and completely changed how we thought about a longstanding problem. The Schruben paper was 19 pages long, contained all proofs within the paper, and invented a new methodology that is still used today.
Why were these papers published? Because they contained great ideas and just enough analysis to establish that. What would the review process do to such papers today? It would drown those ideas in a soup of "extras" so that only the most diligent could find them.
Of course the maturity of our field in 2024 is different from that of 1983, but I believe we are in danger of having our publications become irrelevant to anyone not needing them for tenure. I argued for five objectives:
We should get back to the supremacy of the idea.
We should support and believe in editing.
We should write papers that are capable of being refereed.
We need our papers to be correct and concise to gain traction outside our field.
We should demand only the literature review, analysis, and experiments needed to establish the value of the idea.
Ok, let the arguments begin :-)
------------------------------
Barry Nelson
Walter P. Murphy Professor
Northwestern University
Evanston IL
------------------------------