169 Comments

Fascinating research and as a fellow academic, hell yeah to everything you had to say about scholarly publishing! Learning to write academic papers in grad school (i.e. how to make anything of the slightest interest boring and incomprehensible) almost destroyed me. Thanks for showing how fun science can be.

Check out Seeds of Science! (theseedsofscience.org) - trying to make science publishing fun again!

I’ve always thought that scientific papers needed more things like “That’s replication, baby.😎”

Adam, I really appreciate this. I’m sure it takes some amount of courage as an academic to break the mold like this. Do you think it’s realistic that science could actually be reformed in this direction?… Is it as simple as more people just deciding to write their papers like this? And, what about the downsides? Call me an optimist, but I’m sure there are some good reasons scientific papers are written the way they are. Maybe one is that the more expertise/knowledge you assume of the reader, the easier you can communicate extremely narrow and nuanced ideas that make up the majority of publications, the inch-at-a-time progress in science. That’s not to say that scientists always do this well or scientific writing hasn’t moved beyond this to a different thing. But do you think we will always need hard to read scientific papers?

I'm an optimist as well, and I think it really is a matter of people choosing to write their papers like this, and we'd all be better for it.

It's worth unpacking what, exactly, makes papers hard to read. I think it's a combination of factors. For instance:

1) Norms. Journals demand formal language, and people assume using big words makes them sound smarter (they're wrong: https://onlinelibrary.wiley.com/doi/pdf/10.1002/acp.1178?casa_token=64c4CaLggG8AAAAA:a4ggfli2l8tRRFx1qZdQF9MtJX65s7aveje3lwzuKvaY4cU4W8Sw4np4tK72secoASwum-H7L8KScLuQ)

2) Dishonesty. You don't want to admit flaws or mistakes, and opaque writing helps you conceal them.

3) Fear. People think you're just not allowed to write like you would talk.

Writing this post, there were some things I could have skipped if I was writing only for psychologists, like explaining what MTurk and Prolific are. I think we're actually worse off for skipping that, because we forget that we're studying people in a pretty weird way (anonymous internet people doing tasks for very small amounts of money), and getting accurate data on these platforms requires a lot of effort.

Of course, some ideas are really complicated, and you might assume that complicated language helps you express those ideas better. I think it's the opposite. Complicated language prevents you from realizing you don't really know what you're talking about. In fact, I think a lot of research just wouldn't happen if people wrote about it in accessible language because they'd realize what they're doing is silly (see: https://experimentalhistory.substack.com/p/psychology-is-experimental-history-4eb).

Case in point: last week a theoretical physicist friend of mine was in town. He doesn't read papers at all. "There are too many of them," he said, "and they're filled with stuff that I don't care about." Instead he shows up at physics departments and says "tell me about the physics you've been doing." This is, in fact, much more like the way science used to be done: https://slimemoldtimemold.com/tag/replication/.

"the more expertise/knowledge you assume of the reader, the easier you can communicate extremely narrow and nuanced ideas that make up the majority of publications"

I've had the opposite experience: Using technical jargon makes it easy to hide the fact that you haven't figured out what you actually mean. I've run into too many examples where I can't get an author to explain clearly what a sentence or paragraph they wrote means.

And that's one reason why I think this kind of writing is so rare: It's really hard and takes a lot more time than writing in inscrutable technical jargon.

A second issue: If you just come out and say what you mean instead of hiding behind an ambiguous table of regression coefficients and p-values, then someone might recognize that there's a problem with your interpretation of your experiment in terms of what it tells us about the real world.

One piece of the courage it takes to describe your research findings in simple, straightforward prose is that you make it easy for people to see problems. To do this, I think you really have to believe that it's good for other people to find problems with your work, so you can fix them in the future (see Adam's essay about peer review, in which he describes how most people, when peer reviewers say there's a problem, submit to a different journal and hope the reviewers there won't notice it instead of fixing it: https://open.substack.com/pub/experimentalhistory/p/the-rise-and-fall-of-peer-review?r=1wiwu3&utm_campaign=post&utm_medium=web).

"Do you think it’s realistic that science could actually be reformed in this direction?… Is it as simple as more people just deciding to write their papers like this?"

Frankly, to this I reply: why the heck not? What if more and more researchers woke up one day and decided to diverge from the publishing status quo, and instead opted to throw their work out into the world for others to critique and, most importantly, willingly provide feedback on, like Adam and Ethan have done here? What if we slowly started depending less and less on publishing journals, free to ask curiosity-driven questions without restraints (Is it important? Is it novel? Is it useful?)? At the end of the day, publishing journals should be a tool for research dissemination and a forum for discussion, not an academic burden for the researcher conducting the work. Imagine if only things were different...

Edit: pesky typos

I’m involved with a new platform called MetaROR, which aims to help meta-researchers get feedback on their work in an open way (all peer reviews are made public and all reviewed and revised papers are open access). Consider submitting your work for review in November 2024, our full launch target date. https://researchonresearch.org/project/metaror/

Also, consider creating a similar platform for researchers in your field.

In addition to the style of the writing, another big difference is the format: it's screen-friendly. The typical publication, two columns of serif text published as a PDF, is made for paper printouts and is just about the worst experience for reading on a screen, maximizing the scrolling required. It is telling that the format is optimized for a medium that likely represents much less than 1% of the readership.

Interesting. I thought the lack of serifs in the linked PDF looked very "wrong" (no, I don't know what I mean by "wrong") and it was quite distracting. IMO a serif font, like in this Substack post, looks much better.

(I absolutely agree about columns --- the only thing worse than 2 columns is putting all figures at the very end of your paper)

Agreed. Serif fonts are an established convention, and recent usability guidelines no longer make clear-cut recommendations, since modern screens can easily render fine lines.

It might be the combination of size and spacing, but I find PDF papers so hard to read as I age...

It's not just serif vs. non-serif: PDFs were designed to be printed on paper, so they specify the exact font and spacing used everywhere (and typically embed the font itself). Screens are different from paper, and a font that looks good on paper may or may not be legible on your particular screen (narrow strokes might align with screen pixels on one letter and fall into a blank space on the next, that sort of thing). Web pages are designed for screens, so they are more flexible, and the software that displays them knows the screen and has fonts tuned to that particular screen. If you look at PDFs on really high-end, design-professional monitors, they look as good as web pages. Most of us don't have that luxury.

Serif fonts make reading significantly more difficult for people with low vision and those with certain cognitive disabilities. Multiple columns also cause problems, especially for people using screen magnifiers, who may not realize there is another column of text off to the right. So, unless we're only publishing for "abled" people, a single column of text using a sans serif font is more accessible. Oh - and the plain language used in this paper is also far more accessible. Kudos to Adam!

Ohh interesting. I've never paid attention to that. But now that you mention it, I agree that this font looks better than the one on PsyArxiv. I'll fix that when I upload a new version with some other small fixes. Thanks!

Great point, hadn't thought of this before

Wikipedia should be the reference:

* Readable on all screens

* Can be updated, but edits are traceable

* Review comments are public and available

* References are a click away

* Popup lexicon

Super interesting!

I wonder if people who practice gratitude rank differently in these studies?

About the everyday complaining: My wife and I made up a rule:

If we complain about something we have to follow up immediately with something positive or something that we are grateful for. It really changes the course of the conversation and the mood of the day!

It would be cool to find out! I've thought about how gratitude is basically the opposite of this: thinking about how things could be worse, but aren't, and appreciating that.

Yeah! If 90% of participants imagined how things could be better, I wonder about the other 10% of Negative Nellies. Would they report an overall higher sense of wellbeing? Have they lived especially complacent lives (which they may well be totally fine with)? How does their social life differ?

Interesting! Here’s my interpretation:

Most of what our brains are doing is illuminating a “cone of possibilities” around the present. We are doing this both in search of potential rewards as well as potential threats, but we represent these differently. When we feel safe, the cone broadens, and we consider more potential rewards. When we feel threatened, our cone narrows to the most obvious threats. Narrowing is thus the result of focus.

The potential of a specific monetary reward has the effect of narrowing the cone of awareness, and so people see fewer possibilities for things to be better.

So my hypothesis would be that you can also reduce the generation of “better” alternatives if you first scare people or present them with a negative stimulus.

The reason this approach makes sense is that you can consider replacing “Google” and “pets” (things a person can’t really change) with things a person can change, like “which way am I looking right now” or “what posture am I standing in.” As for myself, once I think about my own posture, it immediately starts improving. Anything I shine the light of awareness on, I start seeing how it _could_ be better. For things outside my control this can produce a sense of frustration or desire, but for things inside my control, like my posture or breath or even facial expression, these things suddenly improve on their own.

It seems as if our brains are always trying to “push” the world into a better state.

We had a similar hypothesis about controllable vs. uncontrollable things. But plenty of things on the list are more controllable: the music you listen to (listen to different music), your desk (rearrange it), your toothbrush (get a new one), etc., and they show the same effect. They aren't quite as controllable as "the way I'm standing right now," but they are far more so than Google, and the effect is the same.

I think I agree with you though––this tendency may be so useful that it's worth having it on all the time, even for things that don't benefit from it.

I really enjoyed reading about your studies and appreciate that you make this accessible to regular people like me. My unschooled opinion is that this is a human function that is related to our survival. We strive to do better, and sometimes we do, and this makes us able to adapt, and hopefully improve our lives.

Thanks for publishing this, excellent read. Two bits of feedback:

1. Openness correlating with better imagined outcomes doesn't seem all that mysterious to me. Openness is, roughly speaking, how likely you are to say "yes" when someone says "hey, want to try something different?". If you more strongly imagine differentness to be good, of course you're more likely to say yes.

2. My guess for experiment 8 - you were trying to sample the total space of ways in which something could be different. The hypothesis you were so pleased about disproving was

"When asked to list only one difference, people were more likely to list a positive change. But maybe that's just because there are more positive changes than negative ones that they can think of, and they were choosing one at random. What if we asked people to list as many as they can?"

Yes! Both of these make perfect sense. In particular, I had an intuitive sense that 1 was true, but I couldn't formulate why. This is it! Thanks.

Amazingly written! I dream of a world in which science is communicated like this by default. I might cite this in my upcoming piece on scientific writing style, I'll keep you posted.

I am working on my doctorate (in Clinical Psych) and, because I love words and love to write, I find myself often getting into trouble for being creative and for using colorful language and metaphors when I write. Given this bias towards the bland in scientific writing, I can understand why an AI system can probably produce passable technical writing. If one takes out the colorful and personal and creative from the end product, who needs a person to do the writing? They just get in the way!

I think at least part of the problem is that the love of words and the love of writing has gotten squeezed out of too many people who "do" science, if it was ever there in the first place! Maybe a creative writing class should be required as part of any STEM-related higher education degree.

I loved your paper! I wonder, though, how one moves a bureaucratic behemoth to change? I used to work in a very traditional business industry, and I remember that trying to get anything to change was like trying to turn a supertanker: by the time the turn is made, the need for it has been left behind, and then there is just a new turn needing to be made. I will do my little, probably insignificant, part to change at least my world.

That's a good point about AI!

I know some people find meaning in struggling to slowly change bureaucratic behemoths, and thank goodness, because somebody's gotta do it. But I think there's a lot of unexplored possibility in just trying to create something better. In clinical especially, I think lots of people are hungry for guidance about their minds and would love to encounter it in colorful language and metaphors.

Loved the piece! Wish more science was like this.

---

I wonder what would happen if a group of meditation practitioners were given these questions?

Thinking about 'how things could be better' could be seen as a pattern of negative thoughts. I wonder if practicing gratitude is one way of reminding one's self that things could be worse, and how good things already are.

So the hypothesis is that people who practice gratitude would be able to list more ways which something can be worse.

--

I also wonder how this would change based on participants' ideas of linear vs. circular time. If participants assume that the direction of the world is 'things always get better', then they might just be listing the things that they think are more likely to happen.

People with circular conceptions of time may believe that the future could be better OR worse than the past?

Great questions! I'd like to know too.

I love this so much. I have long thought that scientific papers and studies are too inscrutable for their own good. And each field has their own lingo, so even if someone were smart and interested in multiple fields, it would be difficult for them to study different disciplines. It's like learning different languages. This is laid out so crystal clear that a layman could digest it without issue. I especially love psychology findings being this easy to understand, because literally everyone could benefit from understanding themselves better.

Love this so much. It immediately brings to mind how artists, creators, and makers relate to the work they do. As an artist and crafter, I’ve noticed that how I feel about my work sometimes depends on how excited I am about the idea in my head, not on how much time it took or other people’s response to it.

In a past life, I thought I wanted to be a psychometrician, but ended up pivoting to digital design and design research. Now I manage teams of designers and one thing I’ve noticed is designers can do good work and still feel discontent because the thing they made is inevitably not as “good” as the thing they imagined when starting out. This happens for a lot of reasons beyond cognitive bias, but the result is feeling bad about themselves or the work. There’s even a delightful chart about the phenomenon: https://www.dropbox.com/s/1hgv9yewqjt5rar/Photo%20Nov%2015%202022%2C%208%2019%2042%20AM.png?dl=0

Anyway, thanks for the great post!

Thanks for sharing! There's some evidence people apply the same reasoning to themselves: I don't measure up because I'm not as good as I could imagine myself being, or not as good as the best people I know (https://psycnet.apa.org/doiLanding?doi=10.1037%2Fxge0000580). It's a good counterpoint to the "better than average effect"––yes, people think higher of themselves than an average other, but that's not necessarily who they compare themselves to.

That graphic is amazing! So true. Why do it be like that?

I'm a prolific user of Prolific (heh) and I remember taking one of these studies! Not sure which one it was, though. I enjoyed getting this peek "behind the curtain".

Very cool to hear from you! Thank you for what you do in the service of science.

I had a lot of fun reading this paper. After reading it I felt I learnt something, and that surely is a much more engaging way of communicating science, while still following all the "right" and expected ways of carrying out proper research. No need to be stuffy to be taken seriously.

I think sometimes researchers write in obscure ways with the purpose of only sharing with a very narrow group of super specialists, but that is not the best way to advance science.

I look forward to reading more articles from you.

Thanks, Essi!

This is the best research article I have read. Ever! Keep up the good work, and thanks for making scientific research and the insights derived therefrom fun, interesting, accessible, and understandable. This is the prototype for democratizing knowledge. I am inspired!

Thank you, Marcus!

Unique, insightful, and very funny! As I read this and kept seeing more studies you did, I couldn't help but think of the phrase "but wait, there's more!"
