Links from around the web: vibecoding, 60-hour work weeks, smaller internet communities, ethical compromises, and expertise

by Tom Johnson on Mar 2, 2025
categories: ai, technical-writing, writing

This post compiles various thoughts and responses to articles I've found interesting. Rather than separating them into individual articles, I've gathered them into a single "News and notes" style post.

Vibecoding

Not a Coder? With A.I., Just Having an Idea Can Be Enough. By Kevin Roose, New York Times, Feb 27, 2025.

Kevin Roose, co-host of the Hard Fork podcast, describes his discovery of the joy of "vibecoding," which refers to using plain-English prompts with AI tools to build software. Roose writes:

These tools, which include Cursor, Replit, Bolt and Lovable, all work in similar ways. Given a user’s prompt, the tool comes up with a design, decides on the best software packages and programming languages to use, and gets to work building a product. Most of the products allow limited free use, with paid tiers that unlock better features and the ability to build more things.

Roose loves building small apps to improve his life, like a lunch buddy app that suggests lunches based on what’s in his cupboards and fridge. I think vibecoding could be useful for tech writers building sample apps with APIs as a way to test and explore the code. Being novices in the subject matter and coding worlds, tech writers are perhaps the primary beneficiaries of technologies like this.

For experts, however, these tools fall short. In the comments on Roose's article, many software developers push back on his enthusiasm, noting that these "toy apps" don't approach the scale and complexity of professional-grade applications, and that the article presents a potentially harmful mirage of AI replacing programmers.

60-hour work weeks

Google’s Sergey Brin Urges Workers to the Office ‘at Least’ Every Weekday, by Nico Grant, Feb 27, 2025.

This post certainly caught the attention of Googlers, who made many memes about it. I’m glad I prefer working in the office anyway; I’ve always struggled to work from home. (When I do, I feel like I’m locked in my bedroom all day.) 60-hour weeks, though? Part of me likes the idea of throwing myself headlong into an intense, challenging project (even though I’m not the one building AGI). It feels good to work hard.

However, I don’t know what happens in tech companies after we reach AGI. My understanding is that AGI means AI on par with expert humans in each field, not superintelligence. Even so, the closer we get to AGI, the more our work accelerates: we’ll get more done in those 60-hour weeks than we would have accomplished a few years ago. This acceleration is the larger message I’m sensing. We’re racing to achieve AGI, but no one fully knows what it means, how it will transform society, or what comes next.

Smaller communities on the internet

The future of the internet is likely smaller communities, with a focus on curated experiences, by Edwin Wong and Andrew Melnizek, The Verge. Feb 25, 2025.

The title captures the gist of the article. Here are a few key quotes:

  • “Consumers crave community, but on their own terms — seeking deeper, more meaningful connections with those who truly matter.”
  • “Nearly half of consumers say they’d rather be a part of a community that doesn’t allow AI-generated content.”
  • “People are abandoning massive platforms in favor of tight-knit groups where trust and shared values flourish and content is at the core.”
  • “53% believe communities should be no more than 200 people digitally.”

I feel the truth of this research. I’ve grown tired of social media and dislike the stream of loosely themed, random content we scroll through. I stopped posting on Twitter, and although I tried Bluesky, I couldn’t find the motivation to post there or read others’ posts. I prefer to read longer, more thoughtful works.

In fact, I’m planning to start an AI book club where we read one book a month and then meet for an hour to discuss it. I like that rhythm of content consumption and community engagement much more.

Ethical compromises with AI

Is it okay? By Robin Sloan, Feb 11, 2025.

This article explores the ethics of AI and whether all the AI slop and destruction of the communication industry is justified by the Superscience discoveries and technological advancements that AI is supposed to bring. Sloan writes:

If an AI application delivers some profound public good, or even if it might, it’s probably okay that its value is rooted in this unprecedented operationalization of the commons.

If an AI application simply replicates Everything, it’s probably not okay.

The language and style Sloan uses emphasize his humanity. It reminds me of a Descartes essay, where the writer shows his thinking cogs turning on the page as he uses writing as a tool for deliberating and processing ideas. I suspect Sloan purposely turns up his “thinky-ness” dial and hams up the emotive style to contrast with the mechanical, lifeless prose of AI slop.

Ramping up expertise to stay relevant

Why AI has meant more work for us as Technical Writers, by Ellis Pratt. Cherryleaf. Jan 20, 2025.

Pratt addresses the question tech writers keep asking: will AI replace us? Will tech writers still be needed? He argues that good language models rely on accurate technical documentation; without this foundation of accurate, detailed content, chat experiences will be poor.

As a result, he’s seen increased interest from companies in hiring technical writers. Companies need tech writers to create the documentation that forms the basis of AI training, and to apply their expertise so that AI outputs are accurate and helpful.

My thoughts? I like the argument that experts will still be needed in an AI-dominant world. There’s a general sense that junior tech workers won’t be needed as much; experts are needed to evaluate the outputs from AI.

But as technical writers, are we the subject matter experts (SMEs), or are we just kinda smart about a wide range of topics but still reliant on SMEs to verify the finer details? One thing’s for sure: expertise will be a highly valuable skill. Perhaps more than ramping up technical skills, we should ramp up our product expertise and become SMEs for the products we document.

In my own work, I see more value in deepening and broadening my understanding of products than in just increasing my technical expertise. As I mentioned in "Do developers need code samples in API documentation?", I’m not sure that technical expertise around code will be all that valuable compared to product expertise.

What’s more valuable is to understand the data returned by the API, the products, the landscape of related products at a company, and more. It’s easy to put on blinders and focus only on the products I support, without expanding my understanding of additional related domains and products. But if I truly want to become an expert, I need to commit more time to the domain.

About Tom Johnson

I'm an API technical writer based in the Seattle area. On this blog, I write about topics related to technical writing and communication, such as software documentation, API documentation, AI, information architecture, content strategy, writing processes, plain language, tech comm careers, and more. Check out my API documentation course if you're looking for more info about documenting APIs. Or see my posts on AI and my AI course section for more on the latest in AI and tech comm.

If you're a technical writer and want to keep on top of the latest trends in tech comm, be sure to subscribe to email updates below. You can also learn more about me or contact me. Finally, note that the opinions I express on my blog are my own points of view, not those of my employer.