I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.
They send me documents they “put together” that are clearly ChatGPT generated, with no shame. They tell us that if we aren’t doing these things, our careers will be dead. And their boss is bought in to AI just as much, and so on.
I feel like I am living in a nightmare.
I am reminded of this article.
The future of web development is AI. Get on or get left behind.
5/5/2025
Editor’s Note: previous titles for this article have been added here for posterity.
The future of web development is blockchain. Get on or get left behind.
The future of web development is CSS-in-JS. Get on or get left behind.
The future of web development is Progressive Web Apps. Get on or get left behind.
The future of web development is Silverlight. Get on or get left behind.
The future of web development is XHTML. Get on or get left behind.
The future of web development is Flash. Get on or get left behind.
The future of web development is ActiveX. Get on or get left behind.
The future of web development is Java applets. Get on or get left behind.

If you aren’t using this technology, then you are shooting yourself in the foot. There is no future where this technology is not dominant and relevant. If you are not using this, you will be unemployable. This technology solves every development problem we have had. I can teach you how with my $5000 course.
PWAs are cool af and widely used for publishing apps on the App/Play stores. It’s a shame they haven’t been adopted more widely for their original purpose of installing apps outside of those stores, but you can’t get everything you want.
Holy shit… XD
lol Silverlight.
In fairness, a lot of those did take over the web for a time and lead to some cool stuff (and also some wild security exploits).
Honestly, I feel like for small data analysis, simple regressions etc., AI is a game changer.
Not for the reasons you might think. As a social scientist, my biggest advantage in postdoc applications was my programming capabilities. Now, my PIs think those skills don’t matter. I have difficulties explaining to them why a vibe coder shouldn’t do science, but it really is impossible for some people…
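For illustration, the sort of thing I mean is a one-prompt throwaway script like the sketch below (the dataset and column names are made-up placeholders, not anything real):

```python
# A minimal sketch of the kind of one-off analysis an LLM drafts quickly:
# a simple OLS regression with pandas + statsmodels.
# "survey.csv" and the column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical dataset
model = smf.ols("income ~ education + age", data=df).fit()
print(model.summary())  # coefficients, std errors, R^2, etc.
```

The code itself is trivial, which is exactly the point: it’s the kind of boilerplate that used to be my edge and now takes anyone thirty seconds.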
My company is doing small trial runs and trying to get feedback on whether the stuff is helpful. They are obviously pushing things because they are hopeful, but most people report that AI is helpful about 45% of the time. I’m sorry your leadership just dove in head first. That sounds like such a pain.
Sounds like your company is run by people who are a bit more sensible and not driven by hype and fomo.
Hype and FOMO are the main drivers of the Silicon Valley economy! I hate it here.
This is all of tech right now.
My boss has mentioned AI once, and it was when I was asking for guidance on how to take over a twice annual presentation he used to run. He said I could just get the LLM to generate the presentation then edit it as needed.
I did not do that. I just made the damn thing myself because, by him saying that, I realized he had no fucking clue how to do any of this so I didn’t have to either.
Corporate has introduced an LLM we’re supposed to be able to use in lieu of asking an HR representative basic questions, like vacation policy and how to get resources for open enrollment. It… barely works. If you happen to use the magic words, it’ll lead you to a PDF of the policy you’re looking for. I just call or email someone in HR. It’s better to build rapport and awareness between the home office and the satellite offices anyway. It makes things smoother when there’s an actual issue to deal with, and people are quicker to help someone they like, even just a little.
My boss was curious about it, so she asked me to show her how to use it. She wanted to have it summarize some data from a spreadsheet. It hallucinated with the very first prompt, and she lost all interest in it (win). Later, I disabled everything Copilot I could in our tenant and implemented policies to hide/remove it from our PCs. I’ve only had one employee (one of our fire captains) complain about it so far, but I don’t feel bad, as his boss was complaining about the AI slop he was turning in on fire reports.
I don’t really see ChatGPT or others popping up in our network traffic either, though I don’t know how/if they are using it on personal phones.
My situation might be different than the average fedi user though… not a very technical group other than me (one guy IT department).
Most blessed outcome. Lucky!
My company has 2 CEOs. One of them doesn’t ever really talk about AI. The other one is personally obsessed with the topic. His picture on Teams is AI-generated, and every other day he posts some random AI tutorial or news into work channels. His client presentations are mostly written by ChatGPT. But luckily, nothing is being forced on us developers. The company is very hands-off in general (some may say disorganized) and we can pretty much use any tools and methods we choose, as long as we deliver good results. I personally use AI only occasionally, mostly for quick prototyping in languages and frameworks I’m unfamiliar with. None of the other devs are very enthusiastic about AI either.
The most technically illiterate leaders are pushing the hell out of using it for things that don’t make sense, while the workers who know what they are doing are finding some limited utility.
Our biggest concern is that people are going to be using it for the wrong stuff and fail to account for the errors and limitations.
I can only speak for my use of it in software development. I work with a large, relatively complex CRUD system so take the following as you will, but we have Claude integrated with MCPs and agent skills and it’s honestly been phenomenal.
Initially we were told to “just use it” (Copilot at the time). We essentially used it as an enhanced Google search. It wasn’t great. It never had enough context, and as such the logic it produced would not make sense, but it was handy for fixing bugs.
The advent of MCPs and agent skills really brings it to another level. It has far more context. It can pull tickets from Jira, read the requirements, propose a plan and then implement it once you’ve approved it. You can talk it through, ask it to explain some of the decisions it made and alter the plan as it’s implemented. It’s not perfect, but what it can achieve when you have MCPs, skills and md files all set up is crazy.
The push for this was from non-tech management who are most definitely driven by hype/FOMO. So much so they actually updated our contracts to include AI use. In our case, it paid off. I think it’s a night and day difference between using base Copilot to ask questions vs using it with context sources.
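For anyone curious what the plumbing looks like: an MCP server is just a small program that exposes tools the model can call. Below is a minimal sketch using the official `mcp` Python SDK, with the actual Jira lookup stubbed out (the server and tool names are made-up examples, not our real setup):

```python
# Minimal sketch of an MCP server exposing a Jira-style tool,
# using the official `mcp` Python SDK (pip install mcp).
# The ticket lookup is stubbed; a real server would call the Jira REST API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("jira-bridge")  # hypothetical server name

@mcp.tool()
def get_ticket(key: str) -> str:
    """Return the summary and requirements for a Jira ticket."""
    # Stub: swap in a real Jira API call here.
    return f"{key}: Add CSV export to the reports page. Acceptance criteria: ..."

if __name__ == "__main__":
    mcp.run()  # serves over stdio so the agent can invoke get_ticket()
```

The agent calls get_ticket() the same way it would any built-in tool, which is where all that extra context comes from.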
What happens when Anthropic triples their prices and your company is totally dependent on them for any development work? You can’t just stop using it, because no in-house developers, if there are even any left, will understand the codebase.
To the same point as lepinkainen, we are fully responsible for the code we commit. We are expected to understand what we’ve committed as if we wrote it ourselves. We treat it as a speed booster. The fact that Claude does a good job at maintaining the same structure as the rest of the codebase makes it no different than trying to understand changes made by a co-worker.
On your topic of dependency, the same point as above applies. If AI support were to drop tomorrow, we would be slower, but the work would get done all the same.
I do agree with you though. I can tell we are getting more relaxed with the changes Claude makes and putting more blind trust in it. I’m curious as to how we will be in a year’s time.
As a disclaimer, I’m just a developer, I’ve no attachment to my company. This is just my take on the subject.
Not OP but:
In our company programmers are still fully responsible for the code they commit and must be able to explain it in a PR review.
It just speeds up things, it doesn’t replace anyone.
Simple things that would’ve taken a day can be done before lunch now, because it’s just prompt + read code + PR (full unit and integration test suites ofc, made by humans).
It just speeds up things, it doesn’t replace anyone.
Oh, my sweet summer child.
Let’s see, we’re understaffed even now and with AI we can kinda keep up.
But I’ll eat my liquorice shoe like Chaplin if this turns into massive layoffs 🫠
I am very, very concerned at how widely it is used by my superiors.
We have an AI committee. When ChatGPT went down, I overheard people freaking out about it. When our paid subscription had a glitch, IT sent out emails very quickly to let them know they were working to resolve it ASAP.
It’s a bit upsetting because many of them are using it to basically automate their job (write reports & emails). I do a lot of work to ensure that our data is accurate from manual data entry by a lot of people… and they just toss it into an LLM to convert it into an email… and they make like 30k more than me.
AI has absolutely no involvement anywhere at my work, which is good.
Just use it to generate the kind of work he does, so that you can prove how worthless his output is.
Our devs are implementing some ML for anomaly detection, which seems promising.
There’s also an LLM with MCP etc. that is writing the pull requests and some documentation at least, so I guess our devs like it. The customers LOVE it, but it keeps making shit up and they don’t mind. Stuff like “make a graph of usage on weekdays” and it includes 6 days some weeks. They generated a monthly report for themselves, and it made up every scrap of data, and the customer missed the little note at the bottom where the damn thing said “I can regenerate this report with actual data if it is made available to me”.
As someone who has done various kinds of anomaly detection, it always seems promising until it hits real-world data and real-world use cases.
There are some widely recognised papers in this field about exactly this issue.
Once an anomaly is defined, I usually find it easier to build a regular alert for it. I guess the ML or LLM would be most useful to me in finding problems that I wasn’t looking for.
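In other words, once you know what “bad” looks like, a plain rule usually does the job. A minimal sketch of that kind of regular alert (the metric, window and cutoff are made-up examples):

```python
# A minimal sketch of the "regular alert" approach: once the anomaly is
# defined, a rolling baseline + fixed cutoff is often all you need.
# Metric, window and cutoff below are made-up examples.
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, z_cutoff: float = 3.0) -> bool:
    """Alert when the latest value sits more than z_cutoff std devs off the recent mean."""
    if len(history) < 2:
        return False  # not enough data for a baseline yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu  # flat baseline: any deviation is anomalous
    return abs(latest - mu) / sigma > z_cutoff

# Example: requests per minute over the last hour, then a spike.
baseline = [100.0, 98.0, 103.0, 101.0, 99.0, 102.0]
print(is_anomalous(baseline, 250.0))  # True -> fire the alert
```

The ML would earn its keep on the problems I didn’t think to write a rule for, not the ones I already have defined.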
I work in social work; I would say about 60 percent of what I do is paperwork. My agency has told us not to use LLMs, as that would be a massive HIPAA nightmare. That being said, we use “secure” corporate email through the Microsoft 365 office suite, which is Copilot-enabled. That gets us TL;DRs at the top before you even look at the email, predictive text… and not much else.
Would I love a bot that could spit out a Plan based on my notes or specifications? Absolutely. Do I trust them not to make shit up? Absolutely not.
Apparently a hospital in my network is trialing a tool to generate assessment flowsheets based on an audio recording of a nurse talking aloud while doing a head-to-toe assessment. So if they say you’ve got a little swelling in your legs, it’ll mark down bilateral edema under the peripheral vascular section. You have to review it before submitting, but it seems nice.
You’re right, that does seem very nice.
The organization I work for uses it, but they’re taking a very cautious approach: we are instructed to double- and triple-check everything AI-generated, and to only use specific tools they approve for work-related matters so as not to train LLMs on company data. They’re also slowly rolling out AI in specific areas before it’s more widely adopted.
Double and triple checking everything takes longer than just doing the work.
I’m in software. The company gives us access and broadly states they’d like people to find uses for it, but no mandates. People on my team occasionally find uses for it, but we understand what it is, what it can do, and what it would need to be able to do for it to be useful. And usually it’s not.
If I thought someone had sent me an email written with AI, I would ask them politely but firmly to never waste my time like that again. I find using AI to write email highly disrespectful. If I worked at a company that made a habit of it, I would leave.