June 22nd, 2024

AWS Lambda Web Adapter

The GitHub repository describes the AWS Lambda Web Adapter, which lets developers run web apps built with familiar frameworks on AWS Lambda, with features like support for multiple endpoint types, response encoding, and local debugging.

The GitHub repository contains information about the AWS Lambda Web Adapter, a tool that lets developers build web applications with familiar frameworks and deploy them on AWS Lambda. It supports running web applications on Lambda behind different endpoint types, along with response encoding, graceful shutdown, and response streaming. Lambda functions can be packaged as Docker or OCI images, or as Zip packages for AWS managed runtimes, with configuration for port settings, readiness checks, compression, and invoke modes. The tool also facilitates local debugging and non-HTTP event triggers, and the repository provides examples for various frameworks and languages.
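As a sketch of how that packaging step typically looks (the image tag, app files, port, and readiness path below are illustrative assumptions, not taken verbatim from the repository):

```dockerfile
# Sketch: packaging an existing Node.js web app for Lambda with the adapter.
# The adapter binary is copied in as a Lambda extension; the image tag is
# illustrative -- pin to a current release from the repository.
FROM public.ecr.aws/docker/library/node:20-slim
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.8.4 /lambda-adapter /opt/extensions/lambda-adapter

# Tell the adapter which port the app listens on and where to probe readiness
# (values here are assumptions for illustration).
ENV PORT=8000 AWS_LWA_READINESS_CHECK_PATH=/healthz

WORKDIR /app
COPY . .
RUN npm ci --omit=dev
# server.js is a hypothetical app entry point that listens on $PORT.
CMD ["node", "server.js"]
```

Per the repository, the same image can also run outside Lambda (locally, or on a container service), since the adapter only activates inside the Lambda execution environment.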

Related

NodeSwift: Bridge Node.js and Swift

NodeSwift bridges Swift and Node.js, giving Node.js code access to native APIs and Swift libraries via SwiftPM and NPM for better performance. It emphasizes safety, simplicity, and cross-platform compatibility, with simplified memory management and seamless integration between the two runtimes.

AdonisJS

AdonisJS is a TypeScript-first web framework for Node.js, emphasizing type-safety, intellisense, and performance. It offers testing support, official packages like Lucid for SQL ORM, Auth for authentication, and a vibrant community.

Show HN: Local voice assistant using Ollama, transformers and Coqui TTS toolkit

The GitHub project "june" combines Ollama, Hugging Face Transformers, and Coqui TTS Toolkit for a private voice chatbot on local machines. It includes setup, usage, customization details, and FAQs. Contact for help.

GitHub – Karpathy/LLM101n: LLM101n: Let's Build a Storyteller

The GitHub repository "LLM101n: Let's build a Storyteller" offers a course on creating a Storyteller AI Large Language Model using Python, C, and CUDA. It caters to beginners, covering language modeling, deployment, programming, data types, deep learning, and neural nets. Additional chapters and appendices are available for further exploration.

Exposition of Front End Build Systems

Frontend build systems are crucial in web development, involving transpilation, bundling, and minification steps. Tools like Babel and Webpack optimize code for performance and developer experience. Various bundlers like Webpack, Rollup, Parcel, esbuild, and Turbopack are compared for features and performance.

12 comments
By @paulgb - 4 months
One word of caution about naively porting a regular web app to lambda: since you’re charged for duration, if your app does something like make an API call, you’re paying for duration while waiting on that API call for each request. If that API breaks and hangs for 30s, and you are used to it being (say) a 300ms round trip, your costs have 100xed.

So lambda pricing scales down to cheaper than a VPS, but it also scales up a lot faster ;)
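A quick back-of-the-envelope illustration of that effect (a minimal sketch; the per-GB-second rate and memory size below are placeholders, not current AWS pricing):

```python
# Illustrative only: Lambda bills per GB-second of duration, so the cost
# of one invocation scales linearly with how long the handler stays
# alive -- including time spent blocked on an upstream API.

PRICE_PER_GB_SECOND = 0.0000167  # placeholder rate, check current pricing
MEMORY_GB = 1.0                  # assumed function memory size

def request_cost(duration_seconds: float) -> float:
    """Duration cost of a single invocation."""
    return duration_seconds * MEMORY_GB * PRICE_PER_GB_SECOND

healthy = request_cost(0.3)    # upstream answers in ~300 ms
degraded = request_cost(30.0)  # upstream hangs for 30 s

print(f"healthy:  ${healthy:.7f}/request")
print(f"degraded: ${degraded:.7f}/request")
print(f"ratio:    {degraded / healthy:.0f}x")  # 30 / 0.3 = 100x
```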

By @sudhirj - 4 months
So the way we're using this is:

* We write our HTTP services and package them in containers.

* We add the Lambda Web Adapter into the Dockerfile.

* We push the image to ECR.

* There's a hook lambda that creates a service on ECS/Fargate (roughly AWS's first-party Kubernetes equivalent) and a lambda.

* Both are prepped to receive traffic from the ALB, but only one of them is activated.

For services that make sense on lambda, the ALB routes traffic to the lambda; otherwise, to the service.

The other comments here have more detailed arguments over which service would do better where, but the decision-making tree is a bit like this:

* is this a service with very few invocations? Probably use lambda.

* is there constant load on this service? Probably use the service.

* if load is mixed, or if there's a lot of idle time in the request-handling flow, figure out the inflection point at which a service would be cheaper, and run that (see the sketch below).

While we wish there were a fully automated way to do this, it's working well for us.
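A rough sketch of that inflection-point calculation (every rate below is a hypothetical placeholder, not an AWS quote; a real comparison would also account for memory size, free tiers, and ALB charges):

```python
# Back-of-the-envelope: duration-billed Lambda vs. an always-on container.
# All rates here are made-up placeholders -- substitute real pricing.

LAMBDA_PER_GB_SECOND = 0.0000167  # placeholder compute rate
LAMBDA_PER_REQUEST = 0.0000002    # placeholder per-invocation rate
SERVICE_PER_HOUR = 0.05           # placeholder for a small always-on task

def lambda_monthly(requests: int, secs_per_req: float,
                   memory_gb: float = 0.5) -> float:
    compute = requests * secs_per_req * memory_gb * LAMBDA_PER_GB_SECOND
    return compute + requests * LAMBDA_PER_REQUEST

def service_monthly(hours: float = 730.0) -> float:
    return hours * SERVICE_PER_HOUR

# Scan request volumes to find roughly where the always-on service wins.
for reqs in (100_000, 1_000_000, 10_000_000, 100_000_000):
    l, s = lambda_monthly(reqs, 0.1), service_monthly()
    cheaper = "lambda" if l < s else "service"
    print(f"{reqs:>11,} req/mo: lambda ${l:8.2f} vs service ${s:5.2f} -> {cheaper}")
```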

By @andrewstuart - 4 months
I've noticed that any project involving AWS requires a huge amount of "programming the machine" versus "writing the application".

What I mean by "programming the machine" is doing technical tasks that relate to making the cloud work the way you want.

I've worked on projects in which the vast majority of the work seems to be endless "programming the machine".

By @luke-stanley - 4 months
How does this compare to Fly.io, which AFAIK wakes on requests in a very similar way, with containers made into Firecracker VMs (unless using a GPU, I think)? I suppose Fly typically doesn't need to scale per request, so it has a cheaper max cost? I guess you'd just set the concurrency, but how well that scales, I don't know.
By @nunez - 4 months
What's the difference between using this vs a Lambda function with an API Gateway app fronting it? I've been doing the latter for many years with Serverless and it has worked very well. Or does this enable you to run your entire web application as a Lambda function? If so, how does it work around the 30s execution window?
By @mlhpdx - 4 months
Help me understand how this could possibly result in a healthier SDLC. Great, it is possible to run the same container on all these services (and probably more given the usual quip about the 50 ways to run a container on AWS). And, I can see that being a tempting intellectual idea and even a practical “cheat” allowing understaffed orgs to keep running the same old code in new ways.

But why? The less it changes, the more vulnerable it is, the less compliant it is, the more expensive it is to operate (including insurance), the less it improves and adds features, and the less people like it.

Seems like a well-worn path to underperformance as an organization.

By @ranger_danger - 4 months
How painful is iterating changes during development on this?
By @tills13 - 4 months
Something I don't really get: if you're going through the trouble of creating a container, why not just run it on Fargate or EC2?

Is it literally just the scaling to 0? And you're willing to give up some number of requests that hit a cold start for that?

By @irjustin - 4 months
Question: this seems to take the port as the primary entry point of communication, which means you need to be running a web server?

Doesn't this add a lot of cold-start overhead, compared to directly invoking a handler?

By @Thaxll - 4 months
Someone with experience with Lambda: how does connection pooling work? Are TCP connections re-used at all?
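For context, one well-known pattern (independent of this adapter) relies on Lambda reusing a warm execution environment across invocations: anything created at module scope, including a client holding open TCP connections, survives between warm invocations and is only rebuilt on a cold start. A minimal sketch with a plain handler:

```python
# Sketch: module-level objects persist across warm invocations of the
# same execution environment, so pooled TCP connections can be reused.
import urllib3

# Created once per execution environment, not once per request.
http = urllib3.PoolManager(maxsize=10)

def handler(event, context):
    # Warm invocations reuse keep-alive connections from the pool;
    # a cold start pays the connection setup cost again.
    resp = http.request("GET", "https://example.com/api")
    return {"statusCode": resp.status}
```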
By @throw156754228 - 4 months
Does this mean Spring Boot, for example, has to spin up and reinitialise itself for every request?