What does HackerNews think of localstack?

πŸ’» A fully functional local AWS cloud stack. Develop and test your cloud & Serverless apps offline!

Language: Python

#25 in Hacktoberfest
#15 in Python
#1 in Testing
Having used localstack [1], I can vouch that it's not a joke.

[1]: https://github.com/localstack/localstack

LocalStack is open source: https://github.com/localstack/localstack

Looks like they're starting a Pro version that requires signup/payment.

Not sure if this will meet your needs. https://github.com/localstack/localstack

You can run services locally using docker, e.g. Lambda

It is a thing; unfortunately it's not officially supported. I also find it crazy that so many developers are fine doing dev directly on "the cloud", as if that's a reasonable use of their time.

https://github.com/lambci/docker-lambda

https://github.com/localstack/localstack

https://github.com/localstack/localstack
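As a concrete sketch of what "running services locally" looks like: newer LocalStack versions expose all mocked services on a single edge port (4566 by default), so pointing an AWS SDK client at that port is usually all it takes. The host, port, region, and dummy credentials below are assumptions about a default LocalStack setup, not anything the comments above specify.

```python
# Minimal sketch of wiring an AWS SDK client to a local mock endpoint.
# Assumes a default LocalStack setup: all services on one edge port
# (4566) at localhost. Adjust host/port for your own container.

LOCALSTACK_EDGE = "http://localhost:4566"

def client_kwargs(service, endpoint=LOCALSTACK_EDGE):
    """Build kwargs for e.g. boto3.client(**client_kwargs("s3")).

    LocalStack accepts any credentials, so dummy values are fine.
    """
    return {
        "service_name": service,
        "endpoint_url": endpoint,
        "region_name": "us-east-1",
        "aws_access_key_id": "test",
        "aws_secret_access_key": "test",
    }

print(client_kwargs("s3")["endpoint_url"])
```

With a LocalStack container running (`docker run -p 4566:4566 localstack/localstack`), you would pass these kwargs to `boto3.client` and talk to the mock exactly as you would talk to AWS.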

Why not try using this toolchain to build and test your serverless application locally?

https://github.com/localstack/localstack

It's a pretty good testing framework that we use at my workplace for a dockerized microservice setup. It requires some config, but it has mocks for most of the big services (RDS specifically looks to be in their paid tier, but we get along with the free version just fine).

This is really cool! A teammate of mine built something like this for a hackathon once, but far less advanced.

How do you think this compares to using a more pure local solution, like localstack? https://github.com/localstack/localstack

https://github.com/localstack/localstack

Though I find a stronger argument in using containerisation to ensure an identical application run-time environment no matter the infra it’s running on.

If we are talking about infra engineering, then sure. That's where I find localstack helps.

There are a lot of projects these days that aim to use Kubernetes as a docker-compose environment. I personally use http://skaffold.dev/ with a local Kubernetes cluster built with either https://github.com/rancher/k3d or https://github.com/kubernetes-sigs/kind. There's an easy argument that running K8s locally is overkill, but if you run your applications locally in K8s, that's one step closer to having your local environment mirror the production environment. Couple that with things like running https://github.com/localstack/localstack locally and you get even closer.
Hello! The first recommendation would be to find a docker image providing a mock for the service you want (e.g. [1]); if that's not enough, you could provision those services on demand in an additional step that executes before the pullpreview step in the GitHub Action workflow file.

[1] https://github.com/localstack/localstack

So I could use this to set up a local dev env? Instead of using something like https://github.com/localstack/localstack
> For us, increasing the memory for a Lambda from 128 megabytes to 2.5 gigabytes gave us a huge boost.

> The number of Lambda invocations shot up almost 40x.

One thing I've learned from talking to AWS support is that increasing memory also gets you more vCPUs per container.
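That matches AWS's documented behavior: Lambda allocates CPU proportionally to configured memory, reaching one full vCPU at roughly 1,769 MB. A back-of-the-envelope sketch (the 1,769 MB figure comes from AWS docs; the proportionality below is an approximation, not an official formula):

```python
# Rough sketch: Lambda allocates vCPU share proportionally to memory.
# AWS documents ~1 full vCPU at 1,769 MB; this linear approximation
# is illustrative, not an official billing/allocation formula.

FULL_VCPU_MB = 1769

def approx_vcpus(memory_mb):
    return memory_mb / FULL_VCPU_MB

small = approx_vcpus(128)   # the minimum memory tier
big = approx_vcpus(2560)    # the 2.5 GB config from the quote above

print(f"128 MB  -> ~{small:.2f} vCPU")
print(f"2560 MB -> ~{big:.2f} vCPU ({big / small:.0f}x the CPU share)")
```

So going from 128 MB to 2.5 GB buys roughly 20x the CPU share, which goes a long way toward explaining speedups like the one quoted above.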

-----

Serverless is great at scaling and handling bursts, but you may find testing and debugging VERY difficult.

A while back I started using an open source tool called localstack[1] to mirror some AWS services locally. Despite some small discrepancies in certain APIs (which are totally expected), it's made testing a lot easier for me. Something worth looking into if testing serverless code is causing you headaches.

[1]: https://github.com/localstack/localstack
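A common pattern that makes this kind of local mirroring painless is to keep the endpoint configurable, so the same code hits LocalStack under test and real AWS in production. A minimal sketch (the `AWS_ENDPOINT_URL` variable name is a convention for this example; newer AWS SDKs read a variable of that name natively, but check your SDK version before relying on it):

```python
import os

# Sketch: pick a mock endpoint under test, real AWS in production.
# Set AWS_ENDPOINT_URL to your LocalStack URL (e.g.
# http://localhost:4566) in CI; leave it unset in production.

def endpoint_url():
    """None means 'use the real AWS endpoint' (the SDK default)."""
    return os.environ.get("AWS_ENDPOINT_URL") or None

os.environ["AWS_ENDPOINT_URL"] = "http://localhost:4566"
print(endpoint_url())
```

You would then pass `endpoint_url()` as the `endpoint_url` argument when constructing your SDK client, and the rest of the code never knows which backend it's talking to.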

I started using Lambda two years ago and my thoughts on it have evolved over time. I was initially frustrated by it, but a lot of the pain points have disappeared (especially no Python 3 support; it now supports 3.6).

Some random thoughts and gotchas in no particular order:

1. Don't do your own deployments. Use the Serverless Framework (https://github.com/serverless/serverless). It's written in JS, but that doesn't lock you into the Node.js lambdas; it can deploy to any of the runtimes.

2. Use Zappa if you want to host a "serverless website". But frankly, serverless websites are only for hobby projects right now.

3. CPU vs Memory scaling sucks. You have to scale both at once; you can't do high CPU/low memory. This hurts the wallet for compute-intensive tasks.

4. 5-min hard timeout on all functions. Don't use Lambda for long-running tasks.

5. Low-CPU lambdas (128MB tier) are terrible at cold starts if you have lots of stuff to import (which WILL BE THE CASE for Zappa applications!). If a cold start exceeds your timeout (which defaults to 30 sec), your Lambda will never complete and never get out of cold start.

6. If you're playing with S3, PUTs will hurt your wallet. A common application of Lambda (the famed "resize images" example) has a workflow that looks like this: GET on API gateway, PUT to s3, triggers an event which fires a Lambda, lambda processes payload, PUTs result back to S3. If processing your payload is fast, the PUTs in this case could be the most expensive part of your flow.

7. X-Ray is atrociously confusing.

8. No cloudwatch metrics for memory usage. Had to do my own using log metrics. Dumb.

9. There's a cool versioning system for the code artifact, kinda like ECS has. It's really nice to use except that unlike ECS it doesn't have native lifecycle rules, which means you have to do your own cleanups. Dumb.

10. Lambda patterns can be implemented more cheaply and more efficiently using tasks queues and autoscaling EC2s. Amazon's "efficient" compute allocation is outweighed by the margins on Lambda pricing. As with other Amazon services, you're paying for the system to be managed.

11. Aurora Serverless is super interesting, but FWICT nowhere near ready for prime time.

12. API gateway is a solution looking for a problem. If you have to use it (eg. you must trigger your lambda through an http request and you don't want to run a web server), use it in proxy mode.

13. Golang lambdas are very promising. I want to try them out.

14. There's a native canary system in Lambda/APIGW nowadays. Also want to try it out. Had to roll my own before it existed and now too scared to touch the production system.

15. If you hit concurrency issues, your problem might not be load, it might be throughput. Is your DB slower than usual? Common pattern: Increase in Lambda requests hitting your DB, slows your DB down, which in turn slows your lambdas down, which causes you to hit concurrency limits. All these metrics are tightly related.

16. Your tests should not assume a Lambda environment unless your app actually depends on the lambda environment (it generally shouldn't).

17. Use this thing: https://github.com/lambci/docker-lambda - Among other things, you can use it to build binary libs/dependencies you want to run on Lambda (cairo for example).

18. Use this thing: https://github.com/spulec/moto -- And also this thing: https://github.com/localstack/localstack

19. Cloudwatch logs suck. If you want your app to be debuggable, make sure you know how to find logs for individually-triggered lambdas (eg. if you're thumbnailing, you should be able to trace the logs for that one thumbnail back into cloudwatch easily)

20. Use Sentry. https://sentry.io/ - You can use the Sentry extra context parameters to help auditing various things (lambda environment, aws api results you get at runtime, etc)
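Item 6's point about PUT costs is easy to sanity-check with list prices. A sketch (the figures below are assumed historical US-East list prices, purely illustrative; check the current pricing pages before trusting them):

```python
# Back-of-the-envelope comparison of S3 PUT cost vs Lambda compute
# cost for a resize-style workflow. Prices are assumptions
# (historical US-East list prices), purely illustrative.

S3_PUT_PER_1000 = 0.005              # USD per 1,000 PUT requests
LAMBDA_PER_GB_SECOND = 0.0000166667  # USD per GB-second

def workflow_cost(invocations, puts_per_invocation, mem_gb, seconds):
    put_cost = invocations * puts_per_invocation * S3_PUT_PER_1000 / 1000
    compute_cost = invocations * mem_gb * seconds * LAMBDA_PER_GB_SECOND
    return put_cost, compute_cost

# 1M thumbnails, 1 PUT each, a 128 MB Lambda running 0.2 s per image:
puts, compute = workflow_cost(1_000_000, 1, 0.128, 0.2)
print(f"S3 PUTs: ${puts:.2f}")
print(f"Compute: ${compute:.2f}")
```

Under these assumptions the PUTs cost roughly ten times what the compute does, which is exactly the trap item 6 describes.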
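Item 16 is worth a sketch: keep the business logic free of Lambda-specific types so it can be unit-tested as plain Python, and make the handler a thin adapter. The event shape below is the standard S3-notification layout, but the function names are made up for this example:

```python
# Sketch of item 16: business logic with no Lambda dependency, plus a
# thin handler adapter. Only the adapter knows about event/context.

def thumbnail_key(key, suffix="_thumb"):
    """Pure logic: derive an output key from an input key."""
    stem, dot, ext = key.rpartition(".")
    return f"{stem}{suffix}.{ext}" if dot else f"{key}{suffix}"

def handler(event, context):
    """Thin Lambda adapter: unpack the S3 event, call the logic."""
    results = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        results.append(thumbnail_key(key))
    return results

# Plain unit test, no Lambda runtime (or mock of it) required:
print(thumbnail_key("photos/cat.jpg"))
```

Tests for `thumbnail_key` never touch an event or a context object; only a handful of adapter tests need a fabricated event dict, and nothing needs the Lambda environment itself.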

I haven't used it yet, but I've heard good things about localstack[0]. Originally it lived in the Atlassian GitHub organization[1], but they moved it to its own organization at some point.

0. https://github.com/localstack/localstack

1. https://github.com/atlassian/localstack/