Nice proof that serverless might be great for projects with a couple of endpoints.

In real-life projects you end up with a ton of YAML spaghetti boilerplate, and when you hit the endpoint limit on your deployment you have to start splitting your code across multiple deployments. Compare this with frameworks where you can actually program your URL router (Django, Rails & co.) and it's definitely a step backward.
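To make the "program your URL router" point concrete, here is a toy sketch (not any real framework's API; the `Router` and `resource` names are made up) of routes being built with ordinary code, the way Django or Rails lets you, instead of one YAML stanza per endpoint:

```python
# Toy illustration of a programmable router: routes are plain data you can
# build with loops and functions. Names here are hypothetical.

class Router:
    def __init__(self):
        self.routes = {}

    def add(self, method, path, handler):
        self.routes[(method, path)] = handler

    def resource(self, name):
        # One call registers a whole CRUD family -- the kind of
        # programmatic shortcut a YAML-per-endpoint config can't express.
        self.add("GET", f"/{name}", lambda: f"list {name}")
        self.add("POST", f"/{name}", lambda: f"create {name}")
        self.add("GET", f"/{name}/<id>", lambda: f"show {name}")

    def dispatch(self, method, path):
        return self.routes[(method, path)]()

router = Router()
for name in ("users", "orders", "invoices"):  # 9 endpoints in 2 lines
    router.resource(name)

print(router.dispatch("GET", "/orders"))  # -> list orders
```

With a YAML-driven serverless config, each of those nine endpoints would typically be its own block of configuration, which is where the boilerplate comes from.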

Once you're past the PoC stage, DynamoDB is the first thing you'll want to replace with PostgreSQL, because you need data integrity and migrations. But that's not your only problem...
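The "data integrity" point is that a relational database enforces constraints at write time, which DynamoDB leaves to application code. A minimal sketch, using SQLite as a self-contained stand-in for PostgreSQL:

```python
# A relational DB rejects inconsistent data at write time; with DynamoDB
# the application itself would have to police this. SQLite stands in for
# PostgreSQL here so the example runs anywhere.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
db.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id),
    amount INTEGER CHECK (amount > 0))""")

db.execute("INSERT INTO users (id) VALUES (1)")
db.execute("INSERT INTO orders (user_id, amount) VALUES (1, 10)")  # ok
try:
    # Orphan order: no user 99 exists, so the database refuses it.
    db.execute("INSERT INTO orders (user_id, amount) VALUES (99, 10)")
except sqlite3.IntegrityError:
    print("orphan order rejected by the database")
```

Schema migrations (adding columns, tightening constraints) are likewise first-class SQL operations, whereas a schemaless store pushes that versioning work into your handlers.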

When endpoints depend on each other, deployment becomes pretty tricky, because you need to wait until your first endpoint is deployed before you can deploy your second. So you're out of the nice "immutable image deployment" paradigm: your frontend code has to support both old and new versions of each endpoint, just as your backend code has to support both old and new data structure versions in DynamoDB, since deploying a chain of endpoints takes a while and is not done atomically. This will slow down development velocity for sure.
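What "supporting both old and new data structure versions" looks like in practice is roughly this (field names invented for illustration): every reader has to tolerate both shapes while the rolling deploy is in flight.

```python
# Sketch of a version-tolerant reader. During a non-atomic deploy,
# records written by old and new code coexist in the table, so the
# handler must accept both shapes. All field names here are made up.

def read_user(item: dict) -> dict:
    # v1 stored a single "name" field; v2 split it into first/last.
    if item.get("schema_version", 1) == 1:
        first, _, last = item["name"].partition(" ")
        return {"first": first, "last": last}
    return {"first": item["first"], "last": item["last"]}

old = {"name": "Ada Lovelace"}
new = {"schema_version": 2, "first": "Ada", "last": "Lovelace"}
assert read_user(old) == read_user(new)
```

Every such compatibility shim is code you write, test, and later remove, which is the velocity tax being described.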

If you intend to keep a real-life project alive over many fast iterations, then locking yourself into a proprietary framework like serverless is the last thing you want to do, in my experience.

So yeah, for a two-endpoint "Hello World" project where autoscaling is valued over iteration speed, serverless and DynamoDB might be a solution to consider, as long as you're willing to lock yourself into proprietary frameworks.

That said, you can have autoscaling and iteration speed with open-source tooling anyway (e.g. with GKE), so why bother with serverless at all? Anything you can do with serverless can be done better with k8s and any actually powerful framework such as Django, Rails, or Dancer, perhaps even Laravel, CakePHP, or Symfony if PHP is your thing.

But let's face it, 99.999% of projects don't need more than 99.9% uptime, which is extremely easy to achieve with a single dedicated server, and a dedicated server gives you a lot of hardware for cheap (unlike AWS). Once you've outgrown a 128 GB RAM dedicated server, then it's time to consider extracting parts of your app into something like serverless.
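The arithmetic behind that uptime claim is worth spelling out: each "nine" is just a downtime budget per year.

```python
# Downtime budget implied by each availability level.
HOURS_PER_YEAR = 365 * 24  # 8760

for uptime in (0.999, 0.9999, 0.99999):
    downtime_h = HOURS_PER_YEAR * (1 - uptime)
    print(f"{uptime:.3%} uptime -> {downtime_h:.2f} h/year of downtime")
```

99.9% uptime allows roughly 8.76 hours of downtime a year, which a single well-run dedicated server can realistically stay within; 99.999% allows about 5 minutes, which is when multi-node setups start earning their complexity.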

Regarding programmatic endpoint handlers: these solve completely different problems, so different that there's no reason you can't use one on top of the other.

When you start hitting endpoint limits, that's a very strong sign of responsibility creep in a microservice, or even worse, that you're treating your serverless deployment as a monolith.

Your insistence on integrity issues with DynamoDB is also strange. Religious adherence to ACID is not going to be your silver bullet for application design. Learning to reason about distributed systems and their eventual consistency is necessary from the start in most cloud setups.

Anything you can do with serverless can be done better with k8s? Cool, have it provision my underlying infrastructure.

The thing is that serverless is often pitched as "the future", and most people I've seen hear "silver bullet": https://aws.amazon.com/fr/blogs/apn/serverless-containers-ar...

As for provisioning, I mentioned GKE already, but I can throw in some more:

https://www.scaleway.com/en/docs/get-started-with-scaleway-k...

https://www.ovh.com/world/public-cloud/kubernetes/

https://github.com/bbelky/hkube

https://github.com/kubernetes-sigs/kubespray