Releases

What’s new at Stack Overflow: February 2026

This month, we’ve launched several improvements to AI Assist, opened Chat to all users on Stack Overflow, rolled out custom badges across the network, and published one of the first community-authored coding challenges.

What’s new at Stack Overflow: January 2026

For this first edition of the new year, we’re taking a step back to highlight some of the most impactful features shipped over the last year and how they can help you start 2026 strong.

Your 2025 Stacked: A year of knowledge, community, and impact

From tough questions to standout answers, your team built a lot in 2025. Your 2025 Stacked brings those contributions together in one shareable snapshot—celebrating the people, posts, and topics that defined your year in Stack Internal.

Latest articles

How everyone and anyone can use AI for good

There are big hitters in the AI space that use this tech for humanitarian and environmental good—from start-ups fighting climate change to voice recognition experts diagnosing diseases. But you don't need to be backed by AWS or Microsoft to do good. In part two of this series, we dive into how anyone can use AI for good.

Is anyone using AI for good?

In a world where AI is replacing human workers, using up energy and water, and deepening disconnect, is AI for humanitarian good even possible? The answer is yes. In the first part of this two-part series, we're taking a look at just a few AI do-gooders and what they're doing to fight climate change, make healthcare more accessible, and help their communities.

Around the web
amplifying.ai

What Claude Code actually chooses

What the AI you choose chooses is not your choice.

plugsocketmuseum.nl

The museum of plugs and sockets

How many breakers could you blow if you plugged all of them in at once?

line-mode.cern.ch

The first webpage ever published

There's an alternate timeline where we're all in the gophersphere instead of the World Wide Web.

youjustneedpostgres.com

You just need Postgres

…except for the cases where you’d need to use other databases, too.

chrisloy.dev

AI makes interfaces disposable

Maybe the UI was really the agentic friends we made along the way.

victoriaritvo.com

Semantle solver

Ask yourself—is it more work to create a Wordle solver than to just solve the Wordle?

spawn-queue.acm.org

What every experimenter must know about randomization

Your randomization is not so random after all.

rkirov.github.io

Learning Lean: Part 1

The sequel of the beloved "If You Give a Mouse a Cookie" is called "If You Give a Mathematician an IDE."

css-doodle.com

CSS doodle

For when you want to feel like you're in high school again, pretending to be productive by doodling in your workbook.

theshamblog.com

An AI agent published a hit piece on me

Is this the start of the human vs. AI flamewars we were forewarned about?

nesbitt.io

Sandwich bill of materials

This is great for beginner sandwich builders, but it doesn't cover the complexity of when a user wants the Dutch Crunch add-on.

o16g.com

Outcome engineering

They may take our vibes, but they will never take our creation!


Issue 318: The year of the AI developer

Happy New Year! We’ve just entered the year of the Fire Horse on the Lunar calendar. Now, not everything needs to be a metaphor for the AI revolution, but if the Lunar forecast is correct, the year of the Fire Horse will bring rapid transformation and intense energy—and it sure does feel like a Fire Horse year to us. On the pod, we’re joined by Shireesh Thota from Microsoft to chat about all things Azure databases, including how the architecture will change with AI. Wikimedia Deutschland’s Philippe Saade sat down with us to discuss how they vectorized 30 million entries during their Wikidata Embedding Project to fight scraping and meet AI needs—a very Fire Horse move. To prove we’re not just horsing around about scraping, we’ve got an episode of Leaders of Code with Cloudflare’s Will Allen that dives into how we partnered with Cloudflare to launch a pay-per-crawl model. And it’s not just us feeling the heat from the Fire Horse. From the web, we have the story of a Fitbit and a sleepless dev who realized AI is transforming how we interact with interfaces. Even the ancient art of mathematics is feeling a change—one PhD mathematician/programmer is learning Lean to keep up with AI-driven shifts in theoretical mathematics. One thing will always stay the same, though: developers love to solve problems the hard way—at least that’s what the story on creating a solver for a Wordle variant sounds like to us. But even in the year of the Fire Horse, we know you’re looking to us for trusted answers. And actually, the AI trust gap is a big problem for developers; we have the deep dive on the blog. So, we’re going to take a page out of the Metal Ox, known for honesty and dependability, and end this Overflow with our trusty and dependable Q&A. What does it mean for something to be “natural”? Is it wrong to ask math people to pick a lane? Are there quokkas in space? Is it just cope to pretend you know Gen Alpha slang?
We’ll try to meet your astrological expectations of us in issue 318—luckily, things with the number 3 bring good fortune for the Fire Horse. That good fortune must be starting already because auspiciously for you, we’ve got all those links and more ready below.

Issue 317: The moral quandary of AI

This is probably not news to you, but the tech world is having a moral quandary lately. Wherever you stand in the ethical and philosophical AI discourse, we’re right beside you with our chins on our fists à la The Thinker. On the pod, Professor Tom Griffiths from Princeton’s AI Lab joins us to detail the philosophical and mathematical history of understanding the human mind, and how these discoveries underlie our development of AI. We also chat with Deepgram’s Scott Stephenson on how they’re advancing voice AI technology, and where voice cloning fits into the ethical dilemmas of this day and age. On the blog, we’re taking an optimistic look at the philosophical AI conundrum. For instance—what if AI will actually create more developer jobs in the long run? We’ve got a piece this week covering how AI’s need for innovation and code will lead to more creative opportunities for developers in every layer of tech, from hardware to application. We’re also wondering—is anyone using AI for good? We’re answering that in a two-part deep dive on companies using AI for humanitarian good, plus how the everyday you and I can use this tech to make the world a better place. But not everyone around the web is as optimistic as our blog this week. We’ve got the story of how one engineer had an AI agent write a hit piece on them after their public critique of the agent’s code—certainly a valid reason for pessimism. But regardless of your outlook on AI, morally or otherwise, the tech is here, which is why this week we’ve included the outcome engineering (o16g for those who want to compress the middle of long jargon) manifesto that lays out the 16 rules for the next chapter of software engineering. And not every ethical and philosophical debate needs to be on AI—there are plenty of other moral arguments to consider from this week’s questions. Is it immoral for your D&D character to attack a solar body if it’s malicious?
Is it wrong for Hollywood to label everything as a “true story” if only part of it is true? Where is the line between working and doodling if it’s all in CSS? Will I condemn the universe if I open a portal with my mind? They probably didn’t teach you any of that in Philosophy 101, but don’t worry—we have all of that and more in the links below.

Issue 316: A technological 2-for-1

It’s time for a classic Stack Overflow Q&A. Q: What’s better than one interview from the floor of re:Invent? A: FOUR interviews from the floor of re:Invent. Also, this question is now closed for being off-topic. Okay, okay, fine, let’s try to stay on-topic—namely the topic of AI. On the pod this week, we’re bringing you chats with Inception’s Stefano Ermon on the power of diffusion models and Roomie’s Aldo Luevano on building physical and software AI with a purpose and real ROI. We’re also joined by Pathway’s Zuzanna Stamirowska and Victor Szczerba to dive into the world’s first post-transformer frontier model, and Mary Technology’s Rowan McNamee to chat about LLMs in the legal world—we’ll have to ask him if this week’s 4-for-1 podcast deal is so good it should be illegal. Speaking of The Law, we consult the Law Stack Exchange as to whether social media grifting is grifting at all—plus the answer to your burning question on what happens to rocket ship boosters that don’t burn up. Not everything is rocket science, though. For instance, neural networks—especially since we have a visualizer for you this week that’ll demystify those mystifying robot brains. Let’s stay on-topic and continue our demystification—we’ve got the story of one dev’s attempt to find what’s on the other side of Google’s 8.8.8.8 DNS. Maybe we owe all the mysteries around the tech we build to the complexity we’ve been adding to it, which is probably why one of the stories from the web this week is on Wirth’s Law of lean software. Oh no, we’ve gone off-topic again. Well, we tried our best. And don’t worry, we’ve got plenty of on-topic and not-closed questions to round out this week’s off-topic Overflow. Is impersonation the highest form of flattery if you’re impersonating a Windows user with lower privileges? Can you “just bumping this thread!” and “quick follow-up on this!” your way into faster code review? Should you let AI kill your darlings if your darlings are all trash?
All of those wonderfully on-topic answers ready for you in the links below.

Issue 315: Are developers stuck in Groundhog Day?

You may not be Bill Murray, but we bet you sometimes feel stuck in a Groundhog Day. Alas, that is the life of a dev. But don’t worry, you won’t have to help every person in Punxsutawney to escape your software Groundhog Day—you just need to read this Overflow. If those same-old vibe coding errors are driving you mad, we have a blog from CodeRabbit’s David Loker on stopping AI-generated incidents. If it’s writing your frontend’s HTML that feels like a time loop, check out our pod with Chris Coyier from CodePen and CSS-Tricks. You might be surprised by how fresh today’s CSS will make you feel. Maybe it’s all the Up Enter Up Up Enter spamming you’re doing that’s got you acting like Phil Connors; if so, we have a story from the web on updating your workflow using a make.ts file. Sometimes, a little of the old isn’t so bad, even if it’s repetitive. For instance, we love the old tech from the Computer History Museum—now available for digital viewing. And sometimes the new is what’s scariest, like the recently discovered malware that was allegedly 100% vibe coded. Hopefully that particular agentic workflow isn’t available in the agent skill directory we have in this issue. All right, you’re not out of the loop yet. You’ll probably get at least a little déjà vu, because as always we’re ending this issue with some Q&A. Is running away from your problems a viable option (in D&D, we mean)? Why can’t I get all the achievements in “Kirby and the Forgotten Land” (this one’s just a skill issue)? What happens when the bodies hit the floor (theoretically, of course)? Why won’t people do extra work for free (you can guess the answer to this one)? Rise and shine, campers, and don't forget your booties 'cause we’ve got all of that for you and more in the links below.