Dear AWS, please let me be a cloud engineer again
The author, an AWS Serverless Hero and principal engineer, criticizes AWS's heavy emphasis on Generative AI over core infrastructure services. They advocate for a balanced approach that values traditional offerings and diverse user needs, urging AWS to prioritize developers' support.
The author, an AWS Serverless Hero and principal engineer, expresses concern over AWS's overwhelming focus on Generative AI (GenAI) to the detriment of core infrastructure services. They highlight how GenAI has taken center stage at AWS events, overshadowing traditional offerings. The author argues that while GenAI has its merits, it should complement existing businesses rather than replace fundamental services. They urge AWS to maintain a balanced approach, acknowledging the diverse needs of their user base beyond GenAI. The author calls for a return to prioritizing core infrastructure, customer feedback, and essential principles like performance and security. Ultimately, they appeal to AWS to rekindle their support for developers who rely on a broad spectrum of services to build and maintain successful applications.
This tagline is representative of every part of the hype around GenAI. It makes it sound like security has fundamentally changed and we all need to re-learn what we know. Everything to do with GenAI is treated like this: we need new security plans, we need AI Engineers as a new job title, we need to completely reevaluate our corporate strategies.
Security in the world of generative AI is not substantially different from what infosec has been dealing with for a while now: user prompts are untrusted input. Model outputs are untrusted input. Treat untrusted input appropriately, and you'll be fine.
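The "treat model output as untrusted input" point can be sketched in a few lines. This is a minimal illustration, not a complete defense: the allowed-action list and the JSON output format are assumptions for the example, not any particular vendor's API.

```python
import json

# Hypothetical allow-list of actions our application knows how to perform.
ALLOWED_ACTIONS = {"summarize", "translate", "classify"}

def handle_model_output(raw: str) -> dict:
    """Parse LLM output defensively, exactly like any other untrusted
    input: parse it, then validate against an allow-list before acting."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError("model output is not valid JSON")
    action = data.get("action")
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unexpected action: {action!r}")
    return data
```

The same discipline applies in the other direction: user prompts get validated before they reach the model, just as form input gets validated before it reaches a database.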
The same goes for "AI engineers", who are in the business of wiring up APIs to each other like any other backend engineer. We take data from one black box and transfer it to another black box. Sometimes a black box takes a very long time to respond. It's what we've always done with many different kinds of black boxes, and the engineering challenges are mostly solved problems. The only thing that's really new is that the API of these new black boxes is a prompt instead of a deterministic interface.
Don't get me wrong, there will be things that will be different in the post-LLM world. But my goodness do the current crop of companies overestimate how large that difference will be.
I keep going back to the basics: Serverless is servers. Machine learning is servers. GenAI is servers. And, from what I've heard, most of AWS revenue is servers and storage.
(For the record: I am also an AWS Hero, and an AWS customer since 2006.)
If nobody is pushing you to use the conventional services, is that because everyone already has as much conventional architecture as they need? Perhaps the new opportunities are all in AI because we've pushed conventional stuff as far as it could go, and we were just rearranging deck chairs.
I'll be honest that, if we've run out of ideas, I dunno if AI really solves any problems I want solved. But even if not I don't see how appealing to AWS fixes anything.
However, what happened is that it became apparent that not everything needs to be big data. Business needs will shine through as they always have and dictate what is truly important.
I'm not afraid of the wave of gen AI. Think of it as the new power tool that just came out that everyone's currently talking about. You'll add it to your toolbox because you don't want to be obsolete. It'll blend into everything else once the hype wave is over.
The problem is, for all its talk over the last few years, AWS remains a complete non-player in the GenAI space, far behind even Azure. In my opinion the problem is exactly the same as for every other high-level service they've tried to launch. QuickSight, Lex, Polly, Cognito, CodeGuru, SageMaker, etc.: they're not good. Nobody ever said "I really like QuickSight, I sure wish it had GenAI capabilities". So when the hastily expanded QuickSight teams then go on to release 42 different Q-enabled SKUs, nobody cares. For various reasons, AWS is organizationally incapable of launching a non-infrastructure product that is simply great, as doing so would take attention to detail and a deep care for things like UX, which are anathema to Amazon.
On the positive side, GenAI model access will be commoditized and part of the basic undifferentiated cloud infra, and AWS will do fine there.
It was an awesome (and awesomely overwhelming) experience, but I completely agree with the author. GenAI EVERYWHERE.
The other topics that the author brought up from re:Invent 2022 were still present, but not without heavy mentions of how AI contributes to them.
That said, I have some predictions that might make OP happier.
DevOps and platform engineering are still hot topics, especially in a world where companies are repatriating back to the data center (or are at least going hybrid). The 2010s bare-metal tech (Foreman, Ansible, etc.) is going to come back in a big way, and Kubernetes consumption will only increase. eBPF and systems engineering are still hot and will really help here for high-performance observability.
Companies that won't repatriate or want to use the cloud for prototyping will want to focus on cost optimization. This requires serious cloud engineering skills (using spot instances and S3 lifecycle policies is table stakes; much more can be done, especially on reporting and automation).
GenAI will help here (it's super helpful for analyzing time-series data and surfacing patterns), but having the fundamentals will always matter.
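The "table stakes" S3 lifecycle idea mentioned above can be sketched as a small config builder. The prefix and day counts below are illustrative placeholders, not recommendations, and the resulting dict is what would be handed to boto3 rather than a complete deployment script.

```python
def lifecycle_config(prefix: str, ia_days: int,
                     glacier_days: int, expire_days: int) -> dict:
    """Build an S3 lifecycle configuration that tiers objects down to
    cheaper storage classes over time, then expires them."""
    return {
        "Rules": [
            {
                "ID": f"tier-down-{prefix.strip('/')}",
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [
                    {"Days": ia_days, "StorageClass": "STANDARD_IA"},
                    {"Days": glacier_days, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": expire_days},
            }
        ]
    }

# Example: tier "logs/" objects down at 30 and 90 days, delete at 365.
# With boto3, this dict would be passed as LifecycleConfiguration to
# s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=cfg).
cfg = lifecycle_config("logs/", 30, 90, 365)
```

The real cost-engineering work the comment alludes to is in the reporting and automation around rules like this, not the rule itself.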
Cf. the Google I/O keynote this year. I couldn't tell you a single thing Google is launching this year, beyond limited, rushed features where Gemini chat sits in a side pane.
And that's not on me: it's because Google literally didn't talk about a single other thing.
And as usual, Google is out of touch and doesn't get the joke; cf. the end of the keynote, where Sundar presents their own count of how many times they said AI.
I sorely miss the tech industry of the 00s; I simply cannot imagine, say, 2000-era Apple/Steve Jobs falling for this. There's this weird, vapid MBA brain drain in charge everywhere. But hey, stonk goes up.
For example, the team / leadership / foundation behind Home Assistant has been pushing AI features hard for the past 18 months or so. This coincides with my feeling that there hasn't been any relevant improvement in Home Assistant's core features and usability; it has been stagnating for over a year now.
This is of course my own opinion, but it makes sense: if a significant share of resources is spent on AI stuff, that share is not available anymore for other needs.
> Your first leadership principle is customer obsession: “Leaders start with the customer and work backwards”.
> I’m your customer, and I’m begging you: please let me be a cloud engineer again.
However, as with many enterprise products, the author is not the customer; he's the user.
The customers are the companies that buy AWS because it's an essential technology for their strategy. When the whole tech world is talking about generative AI, they want to be there, and Azure seems to be ahead because of the Microsoft deal with OpenAI (even if Azure isn't actually ahead, customer perception matters most).
So what Amazon is trying to do by making all of these conferences and announcements about GenAI is send a message to its customers: we are ahead of the GenAI wave, and you can still trust that our products will help you ride the hype.
> I’m your customer, and I’m begging you: please let me be a cloud engineer again.
Only AWS knows how many H100 GPUs they have, how busy those GPUs are, how many people are paying for them, how many people want them and can't get them, and how many people just don't care at all.
It's possible that the focus on GenAI for Re:Invent 2023 wasn't based on any hard data like that, and is really just up to the whims of Adam Selipsky since Jassy moved over, but maybe someone who better knows their planning process can comment.
In private they are truly thirsty for AI applications they can write use cases on; they even offer upwards of 100K in credits for GenAI purposes only.
I think the technical specialty most threatened by automation by AI is the exact job the author has: solutions engineers who build commodity cloud infra on AWS, Azure, GCloud, etc.
Look at the progression in abstraction from standard sysadmin IT work to serverless deployments, especially with IaC tools.
You can describe your architecture to ChatGPT and it can spit out CloudFormation YAML. It will be rudimentary and poor, but I could see cloud providers offering a GenAI tool where all you do is describe your app; it then deploys infra on your behalf and optimizes from there.
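The first half of that workflow can be sketched as a prompt builder. `build_infra_prompt` is a hypothetical helper for illustration, not a cloud-provider or OpenAI API; the YAML a model returns would be rudimentary and would need linting and human review before any deployment.

```python
def build_infra_prompt(description: str) -> str:
    """Wrap a plain-English app description in a prompt asking an LLM
    for a CloudFormation template. The wording is an illustrative
    assumption, not a provider-supplied prompt."""
    return (
        "Generate a CloudFormation YAML template for the following "
        "application. Output only valid YAML, no prose.\n\n"
        f"Application: {description}"
    )

prompt = build_infra_prompt("a static site behind a CDN with an S3 origin")
# The prompt would then be sent to a model API, and the returned
# template checked (e.g. with a CloudFormation linter) and reviewed
# by a human before being deployed.
```

The hard part the comment is pointing at is the second half: having the provider validate, deploy, and optimize the generated infrastructure automatically.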
Not trying to talk down on folks who do this type of work, but sharing my opinion on where I think the author is ultimately coming from.