
PSA: porting an existing application one-to-one to serverless almost never goes as expected. Couple of points that stand out from the article:

1. Don’t use .NET, it has terrible startup time. Lambda is all about zero-cost horizontal scaling, but that doesn’t work if your runtime takes 100 ms+ to initialize. The only valid options for performance sensitive functions are JS, Python and Go.

2. Use managed services whenever possible. You should never handle a login event in Lambda, there is Cognito for that.

3. Think in events instead of REST actions. Think about which events have to hit your API, what can be directly processed by managed services or handled by you at the edge. E.g., never upload an image through a Lambda function; instead, upload it directly to S3 via a signed URL and then have S3 emit a change event to trigger downstream processing.

4. Use GraphQL to pool API requests from the front end.

5. Websockets are cheaper for high throughput APIs.

6. Make extensive use of caching. A request that can be served from cache should never hit Lambda.

7. Always factor in labor savings, especially devops.



So, to summarize, you should:

1. not use the programming language that works best for your problem, but the programming language that works best with your orchestration system

2. lock yourself into managed services wherever possible

3. choose your API design style based on your orchestration system instead of your application

4. use a specific frontend RPC library, because why not

...

I've hacked a few lambdas together but never dug deep, so I have very little experience, but these points seem somewhat ridiculous.

Maybe I'm behind the times but I always thought these sort of decisions should be made based on your use case.

EDIT: line breaks.


The way I read the above comment is: if you can live with the following limitations, then lambda/serverless works great. I have gotten to a point where for any internal systems used by internal users, lambda is my de facto standard. Very low cost of operation and speed to market. For anything that is external facing I prefer not to use lambda, especially if growth of usage is unpredictable.


You are not wrong. But it is all about saving money on labor. The rest are just the constraints of the system you use (aka requirements). It's like complaining about the need to use POSIX on Linux.


20 years ago we had enterprise Java. It's still "there", but running Spring is very different from what it used to be.

You'd simply upload an ear or war, and the server you deployed to would handle configuration like the DB etc.

It worked perfectly (ear/war; the persistence framework was too verbose and high level IMO, but that was replaced by Hibernate/JPA). There was too much configuration in XML, but that could easily be replaced by convention, annotations and some config.

Again.. we are running in circles, and this industry will never learn, because most “senior” people haven’t been around long enough.


> Again.. we are running in circles, and this industry will never learn, because most “senior” people haven’t been around long enough.

And that likely won't change in our lifetime, given the rate of growth in demand for software: we literally can't create senior engineers fast enough for there to be enough to go around.

As an aside, I have the privilege of working with a couple of senior folks right now in my current gig, and it's pretty fucking fantastic.


The percentage of seasoned engineers is so low that 'senior' as a title often seems to stretch to "whoever is most experienced around here". That's probably fine, since people understand that experience is not reducible to that title. But this does bring to mind a metric for finding "objectively" senior engineers:

What's the biggest idea you've seen abandoned and then reinvented with a new name?


Cooperative multitasking => Node


I feel like we're just transferring the labour from ops to dev though. Where I work we still haven't got as good a development workflow with lambdas as we did with our monolith (Django).


I think you're right about this.

Optimistically, it could represent a positive trade-off that replaces perpetual upkeep with upfront effort, and all-hours patching and on-call with 9-5 coding.

In practice, I think a lot of those fixed costs get paid too often to ever come out ahead, especially since ops effort is often per-server or per-cluster. The added dev effort is probably a fixed or scaling cost per feature, and if code changes fast enough then a slower development workflow is a far bigger cost than trickier upkeep.

Moving off-hours work into predictable, on-hours work is an improvement even at equal times, but I'm not sure how much that actually happens. Outages still happen, and I'm not sure serverless requires much less out-of-hours ops time than something like Kubernetes.


Paying one guy is cheaper than two. So... You are not wrong :)


Unless your work doubles and you need to hire another dev.


No, that is always the way of work: reduce workers, increase load. We in IT have just been spared from it so far. Now it's arriving here too.


> Paying one guy is cheaper than two.

Now that guy's workload doubled and needs another colleague to be able to deliver.


I see your point, though POSIX imposes very few (if any) architecture decisions on application developers. The kind of design choices we're talking about are very different from those of POSIX-like utilities, so I'm not sure that analogy is a good one.


That's so true and I'm happy people begin to realize that.

The worst for me is the vendor lock in, directly followed by the costs.


There are some HUGE benefits to this type of architecture (services + lambda where required) for large corporations, the main one being an insane reduction in a bunch of worthless crap that you no longer have to do:

- OS version patching, patch windows & outages, change mgmt relating to patching and reporting relating to OS patching

- antivirus, the same patching management and reporting as above

- intrusion detection / protection, host based firewalls, the same patching and management as above

- Other agents (instance health monitoring, CMDB, ...)

- Putting all this junk on a clean OS image any time something changes, re-baking and regression testing everything

This all adds up, and can be a significant cost to an organisation - team(s), licenses, management, etc.


That's basically why we're doing it, and we're seeing some really good cost savings from implementing various things using Azure Functions. Not perfect, but an extremely capable system, and there are things some of our teams are doing with Durable Functions that I was completely stunned by when they explained how it all works. Microsoft have some very good technology there.

The only thing I'm sad about is that you can only host it on Azure. I'd love an open standard for function-like programming and hosting.

Of course, not everything is suitable for this model, and it certainly won't be the most cost-effective or performant way to do it for everything either.


I think the comment is exactly opposite of what you are suggesting.

The comment is saying that Lambda has limitations and works best when considering those limitations. If those limitations don't fit your use case, you shouldn't be using Lambdas - or, at least, don't expect it to be an optimal solution.


Think about serverless as framework-as-a-service. It has a learning curve, but if you buy in, it is an amazing productivity boost.

(If Reddit’s video hosting being built and operated on a serverless stack by a single engineer won’t convince you, I don’t know what will.)


90% of Reddit videos are unwatchable for me: they start OK, then the quality is downgraded to something unwatchable, and there's nothing I can do about it.

I even tried to download them with youtube-dl but it doesn't work.


100% agreed. The initial buffering time on them is ridiculous. I've started uploading to streamable and just posting that link rather than upload video straight to reddit.


Makes me wonder in how many other [recursive] ways progress is held back by a lack of it.


In 2010, at 24 years old, I built a MySpace-clone SaaS with music and video hosting and everything it implies: large uploads, background jobs, compiling an nginx patch to support range requests, AJAX so videos and music kept playing while browsing, all with Django on a 20-bucks/month server. If that doesn't convince you, I don't know what will.


I think the point is, with your Django approach, you'll be stuck doing ops work 9-5 once you start getting customers, whereas with serverless you can spend the time to do more feature dev work.


Not quite sure about that; nowadays I require a couple of 20-bucks/month servers to run one CI/staging deployment and one training/production deployment. I'm at the point where I practice CD as in "deploy each git-push to branchname.ci.example.com and run Cypress on it" and I am still able to deliver in half a day what the customer would expect to take two weeks.

And of course, baremetal provides a much better ROI than VMs/VPS/<insert glorified chroot/jail here>.


You seem to have gotten your deployments down and I really think that's good. In my own experience, though, managing your own infra always works well until it doesn't anymore. And when it stops working well, it crashes and burns and sucks up all the time. Going with managed services like serverless helps to get around that.


that sounds cool, what happened to it? that'd be a fun project to work on today :)


I just had no marketing and no partner for it, but I could rebuild it in a similar time frame if only I had a plan to make a profit out of it. It made it into a local newspaper, but that was both the beginning and the end. It's from the time when I was finding musicians in the streets to pay me 50 bucks per website and hosting them on grsecurity-hardened Gentoo ... which I don't practice at that cost anymore, of course. https://www.charentelibre.fr/2010/02/09/article-5-internet-a...


If Reddit video is the poster child then serverless is in big trouble.

That thing almost never works.


They've already ruined React!


Until he leaves and some other poor dev has to dig into his stack.


> I've hacked a few lambdas together but never dug deep

Then why comment? You clearly don't understand the use-case that AWS fits.

I've had jobs that took 18 hours to run on single machine finish in 12 minutes on Lambda. I could run that process 4 times a month and still stay within AWS's free tier limits.

For the right workloads it is 100% worth realigning your code to fit the stack.


>Then why comment?

Because the instruction list from above isn't backed with any solid reasoning and because commenting is what people do on HN.

>You clearly don't understand the use-case that AWS fits.

Pray tell, what is this enviable use case that I so clearly do not grasp?


>I've had jobs that took 18 hours to run on single machine finish in 12 minutes on Lambda. I could run that process 4 times a month and still stay within AWS's free tier limits.

OK, I'll bite: what takes 18 hours to run on a single machine but finishes in 12 minutes on Lambda?


I worked on a service a year ago that would stream a video from a source and upload it to a video hosting service. A few concurrent transfers would saturate the NIC. Putting each transfer job in a separate lambda allowed running any number of them in parallel, much faster than queuing up jobs on standalone instances


Yes, but running multiple lambda jobs in parallel would still add up to more time than 12 minutes. What am I missing?


If I was running 10,000 transfer jobs in parallel and the longest of them took 12 minutes, the job would take 12 minutes


Yes but you are still charged for 18 hours of compute time?


That’s true, the cost needs to be factored into the model. But the near infinite bandwidth scalability allows the service to exist to begin with. If every job saturates your up and down bandwidth and takes 10 minutes, and you have 100 coming in a minute, you would need to design a ridiculous architecture that could spin up and down instances and handle queuing on the scale of thousands based on demand. Or you can write a simple lambda function that can be triggered from the AWS sdk and let their infrastructure handle the headache. I’m sure a home grown solution will become more cost effective at a massive scale but lambda fits the bill for a small/medium project
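A fan-out like the one described could be sketched with boto3's asynchronous invoke; the function name is hypothetical, and the client is injectable only so the sketch can be exercised without an AWS account:

```python
import json

def fan_out(jobs, client=None):
    # One asynchronous Lambda invocation per transfer job.
    # InvocationType="Event" returns immediately (HTTP 202); Lambda's
    # own infrastructure handles the queuing and parallelism.
    if client is None:
        import boto3  # real AWS SDK client when none is injected
        client = boto3.client("lambda")
    for job in jobs:
        client.invoke(
            FunctionName="video-transfer",  # hypothetical function name
            InvocationType="Event",
            Payload=json.dumps(job).encode(),
        )
```

Each invocation becomes its own execution environment, so a thousand transfers run as a thousand concurrent functions (subject to the account's concurrency limit).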


If you throw more resources at a bottlenecked problem, it will go faster.


Right, but without the lambda infrastructure it would be infeasible from infrastructure and cost perspective to spin up, let’s say 10,000 instances, complete a 10 minute job on each of them, and then turn them off to save money, on a regular basis


Isn't that also possible with EC2? Just set the startup script to something that installs your software (or build an AMI with it). Dump videos to be processed into SQS, have your software pull videos from that.

You'd need some logic to shut down the instances once it's done, but the simplest logic would be to have the software do a self-destruct on the EC2 VM if it's unable to pull a video to process for X time, where X is something sensible like 5 minutes.
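The self-destruct logic might be sketched like this (all callables here are placeholders; a real `receive` would long-poll SQS rather than spin, and `terminate` would call the EC2 API on the instance's own ID):

```python
import time

def worker_loop(receive, process, terminate, idle_limit_s=300):
    # receive()   -> next job, or None if the queue was empty
    # process(j)  -> handle one video
    # terminate() -> e.g. stop/terminate this EC2 instance via the API
    idle_since = time.time()
    while True:
        job = receive()
        if job is None:
            if time.time() - idle_since >= idle_limit_s:
                terminate()  # self-destruct: no work for idle_limit_s
                return
            continue
        process(job)
        idle_since = time.time()  # reset the idle timer after each job
```

With X = 5 minutes as suggested, `idle_limit_s=300` means the instance shuts itself down once the queue has been dry for that long.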


We developed a web-based tool that described water quality based on your location. We generated screenshots of every outcome so the results could be shared to FB/Twitter. It was something on the order of 40k screenshots. Our process used headless Chrome to generate a screenshot, which was then uploaded to S3 for hosting.

Doing that in series took forever: something like 14 hours to generate the screenshots, then 4 hours to upload them all. Spreading that load across Lambda functions allowed us to basically run the job in parallel. Each individual Lambda took longer to generate a screenshot than our initial desktop process did, but the overall process was dramatically faster.


The parallelism argument doesn’t pass muster because you can do the same thing with a cluster of free tier t2.micro machines with any good orchestration platform, not just lambda.

This argument is basically: no counterpoint to the original post, but you can do things that are also easy on any other comparable platform.

Tell me again what I don’t understand?


> you can do the same thing with a cluster of free tier t2.micro machines

Not if you already used your free tier. Lambda is X free per month. Free tier t2.micro is X free for first 12 months.


> Tell me again what I don’t understand?

As someone who has done both, it's far, far easier to stand up a lambda than it is to manage a cluster of servers.


This still doesn’t make sense. There are portable systems that do the same, and have fully managed options, such as kubernetes.

In my mind the thing that makes lambda “easier” is they make a bunch of decisions for you, for better or worse. For real applications probably for the worse. If you have the knowledge to make those decisions for yourself you’re probably better off doing that.


> This still doesn’t make sense. There are portable systems that do the same, and have fully managed options, such as kubernetes.

The whole value proposition behind AWS is that they can do it better than your business due (directly or indirectly) to economies of scale. I think Kubernetes is super cool, but rebuilding AWS on top of Kubernetes is not cost effective for most companies--they're better off using AWS-managed offerings. Of course, you can mix and match via EKS or similar, but there are lots of gotchas there as well (how do I integrate Kubernetes' permissions model with IAM? how do I get Kubernetes logs into CloudWatch? how do I use CloudWatch to monitor Kubernetes events? etc).


An unoptimized query?


Why would the processing time differ? Would you have multiple lambdas running different subsets of the unoptimized query?


You could achieve the same with basically any decent concurrency model on a single machine.


Maybe one lambda is timing out to fallback, slowing sagas down. I don't know.


I can't support point 7. enough. People often forget about the cost of labor.

We migrated our company webapp to Heroku last year. We pay about 8 times what a dedicated server would cost, even though a dedicated server would do the job just fine. And often times, people tell me "Heroku is so expensive, why don't you do it yourself? Why pay twice the price of AWS for a VM?"

But the Heroku servers are auto-patched, I get HA without any extra work, the firewall is setup automatically, I can scale up or down as needed for load testing, I get some metrics out of the box, easy access to addons with a single bill, I can upgrade my app language version as needed, I can combine multiple buildpacks to take care of all the components in our stack, build artifacts are cached the right way, it integrates with our CI tool, etc, etc.

If I had to do all of this by hand, I would spend hours, which would cost my company way more. In fact, I'd probably need to setup a Kubernetes cluster if I wanted similar flexibility. By that point, I'd probably be working full-time on devops.


Once you factor in the learning time for AWS per developer, the cost is even higher.

At my previous company we had a project with an AWS deploy process that only two developers could confidently use. Teaching a new developer and keeping them up to date was a big time sink.

For comparison, we had a Rails app set up on Heroku that junior devs were happily deploying to on day one (plus we had Review Apps for each PR!)


This is a good point. Expecting developers to understand how to configure service to service IAM permissions and all the other nuances of AWS infrastructure is a fool's errand. Also one of the reasons we started Stackery.



I'm curious: did you look into Google's App Engine? It seems to have a lot of the benefits that Heroku offers, but is much cheaper.

Granted, it does impose some limitations and therefore isn't right for all apps. But it does seem like it would work for a large percentage of web apps and REST APIs.


The cost you're talking about is really hard to measure. Were they able to reduce the team sizes and get rid of positions after the change? Did the payroll reduce at all?


Same for us.

- Corrupted build? Reverse button to the rescue.

- SSL? They've got you covered.

- Adding new apps in less than 1m?

and so on ...


How is that any different than running your app in Kubernetes or, heck, even deploying it with ansible?


I also feel the same about point 7.

The big difference is that we are migrating away from Heroku to Kubernetes for the same reason.


> PSA: porting an existing application one-to-one to serverless almost never goes as expected.

Came here to post this and agree 100%. Moving to Serverless requires evaluating the entire stack, including the server side language choice, and how the client handles API calls.

Often a move to serverless is better accomplished gradually in stages than the quick "lift and shift" that AWS like to talk about so much. Sometimes you can simply plop your existing app down in Lambdas and it runs just fine, but this is the exception not the rule.


> The only valid options for performance sensitive functions are JS, Python and Go.

With custom runtimes that's not the case anymore. I write my lambdas in Rust.

Can't stress (7) enough, would also add 'morale' savings. It can be really stressful for developers to deal with gratuitous ops work.


> Don’t use .NET, it has terrible startup time. Lambda is all about zero-cost horizontal scaling, but that doesn’t work if your runtime takes 100 ms+ to initialize. The only valid options for performance sensitive functions are JS, Python and Go.

Shouldn't this not be a problem if you're doing 10 million requests a day? If you have enough requests, your lambdas should stay hot most if not all the time.


If the lambdas are always hot, what is the advantage over having a server? I thought the big selling point of serverless was not having to pay for long stretches of time where you don't have any requests.


If you have 10m requests uniformly distributed, then yes it’s less of a problem, but that’s unlikely. (Even then lambda containers will be recycled multiple times throughout the day, so there is still a small penalty.)


I built an Azure Function that runs for free and just pings my .NET MVC pages periodically so they are always hot on my cheap hosting.
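A keep-warm pinger of that sort is only a few lines; in this sketch the page URLs are placeholders, and the HTTP call is injectable so the example runs without network access:

```python
from urllib.request import urlopen

PAGES = [
    "https://example.com/",       # hypothetical pages to keep warm
    "https://example.com/about",
]

def keep_warm(pages=PAGES, fetch=None):
    # Timer-triggered function body: one GET per page so the app pool
    # never goes cold between real visitors.
    fetch = fetch or (lambda url: urlopen(url, timeout=10).status)
    return {url: fetch(url) for url in pages}
```

On Azure, the same body would hang off a timer trigger (e.g. a cron-style schedule) instead of being called manually.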


You can just use application insights for this. It can also show you the results of the ping over time in a scatter chart.


Oh interesting, maybe I should look into this more; I feel like it would be useful. But one day it just showed up in my projects and spammed out and drowned all my debug output in the log, so now I rip it out as soon as I can, because it drives me nuts and I couldn't find an easy way to turn down the messaging.


If you're still reading this: you can disable Application Insights during debugging (why would you need it during debugging anyway?). To do this, make an application variable like 'EnableApplicationInsights' in your web.config so you can set per environment whether or not it should be on.

Then if this is false, in your Application_Start() you can set this: TelemetryConfiguration.Active.DisableTelemetry = true;


I need to include this into my builds.


Having used serverless a bit, I’ve run into many of the same issues and generally agree with the advice but depending on your use case it may not be worth contorting your architecture to fit serverless. Instead, I’d look at it as a set of criteria for if serverless would fit your use case.

At this point in time the only places I’d use lambda are for low-volume services with few dependencies where I don’t particularly care about p99 latency, or DevOps-related internal triggered jobs.


That's more steps than 'just use containers and ignore the serverless meme'.


I don’t think anybody advocates for rewriting all existing projects as serverless. But if you’re starting a startup, going all in on serverless will let you deliver better products faster. If Paul Graham’s Beating the Averages would be written today, the secret weapon would be serverless, not Lisp.


> going all in on serverless will let you deliver better products faster

Can you show some empirical evidence that supports this? In my experience this is another nebulous serverless hype claim that doesn't withstand scrutiny.


I don’t think it’s possible to produce empirical evidence to prove or disprove this claim, but it’s just common sense: Using managed services leads to writing less code and minimizing devops work, both resulting in more time for feature development that takes less time overall and produces higher quality service (latency, availability, etc). Then there is the added benefit of clear financial insight into the inner workings of the application (one can trace capital flows through functions) which will result in better resource allocation decisions.


> but it’s just common sense

No. There's nothing common sense about it. It only seems plausible if you read the sales brochure from a cloud vendor and have no experience with all the weird and whacky failure modes of these systems and the fact that none of the major selling points of serverless actually work as advertised unless you dedicate significant engineering time to make them work - as the GP comment has demonstrated. The amount of engineering time required to make serverless work quickly catches up to or even exceeds just doing the damn thing the normal way.

And that engineering time is not transferable to any other cloud vendor, and neither is your solution now. So congratulations you just locked your business in.

Serverless only makes sense if you have a fairly trivial problem and operate on really narrow margins where you need your infra and associated costs to scale up/down infinitely.


> Serverless only makes sense if you have a fairly trivial problem

That’s exactly the point. The web application needs of most startups are fairly trivial and best supported by a serverless stack. Put it another way: If your best choice was Rails or Django 10 years ago, then it’s serverless today.


If your best choice was Rails or Django 10 years ago you probably don't have a viable startup today. Why? Because it's 10 years later. Technology moves on and market niches get filled. There are orders of magnitudes more people with the skill to setup a basic CRUD webapp, and about 15 years for the markets that these can serve to have been filled.

As a side note, I've learned that the existence of a large number of viable ways to accomplish a task is a pretty big anti-signal for the desirability of accomplishing that task in the first place. When I started my career in 2000, there was a huge debate over whether the "right" way to develop a desktop application was MFC or .NET or Java or Visual Basic or Qt or WxWindows. The real answer was "don't develop desktop apps, because the web's about to take over". When the big web 2.0 businesses were founded from 2005-2011, there were basically two viable options for building a webapp: Rails or Django. Now that everyone's arguing about microservices vs. Docker vs. Kubernetes vs. serverless vs. Beanstalk vs. Heroku, it's likely that the real answer is "develop a blockchain app instead".


> If your best choice was Rails or Django 10 years ago you probably don't have a viable startup today. Why? Because it's 10 years later. Technology moves on and market niches get filled. There are orders of magnitudes more people with the skill to setup a basic CRUD webapp, and about 15 years for the markets that these can serve to have been filled.

That's... not true. The choice of web stack – and, in fact, the whole software – is just a piece of what a startup may need.

Seriously, look at the list of YC startups on 2018 and tell me if most couldn't use either something like Rails, or a Single Page App In React With A Serverless Backend. And it wouldn't matter one bit.

https://techcrunch.com/2018/03/20/these-are-the-64-startups-...

> it's likely that the real answer is "develop a blockchain app instead".

I hope that was sarcasm.


> The web application needs of most startups are fairly trivial and best supported by a serverless stack.

Pretty subjective statements, I suppose we don't have the same definition of "trivial".

> If your best choice was Rails or Django 10 years ago, then it’s serverless today.

Comparing the features of Rails or Django with serverless is like comparing a spaceship with a skateboard.


Because the django was riding a rail on her skateboard and bumped into a spaceship?


> Using managed services leads to writing less code and minimizing devops work, both resulting in more time for feature development that takes less time overall and produces higher quality service (latency, availability, etc).

Well, not necessarily? This assumes that the implementation is sound, but it is not at all uncommon for abstractions to leak, which ends up causing more pain than it solves.


Is there really that much hype? I feel like I haven't heard that much. Serverless isn't even really much of a new thing, there have always been providers that hid the underlying aspects of running a web site on the internet. I think for most people they just don't want to have to worry about patching a machine, rolling logs, watching disk space, etc, if they don't need to.


We tried to go down the serverless path, but it took WAY more dev resources than using ec2.

It is not at all obvious what it just can't be used for. In our case, Julia.


Isn't Netflix serverless?


I will argue the opposite. Startups take on enough risks as it is. Unless your startup requires or is about a novel architecture, why add more risk with non-battle-hardened technology?

Software professionals often see benefits without understanding the tradeoffs.


Lambda is 5 year old technology. This is like arguing in 2011 that startups shouldn’t use EC2, because it’s “risky”.


The technology age isn’t the issue. The issue is how many projects have successfully deployed large scale reliable systems built with Lambda.


The internet is full of success stories if you care to look. My favorites:

- iRobot (maker of Roomba) has been running its entire IoT stack serverless since 2016 (architect Ben Kehoe is a worthwhile follow on Twitter)

- Reddit’s video hosting service is built and operated by a single engineer on a serverless stack


Reddit’s self hosted video is terrible. Using them to advocate serverless is like using Twitter on the fail whale days to advocate Rails.


Thanks. This is good info.


Large scale reliable systems are antinomic with “launching a startup”. You’re going to go through 2 or 3 pivots and as many refactors, large scale is the last thing you want to optimize for.


I thought Paul Graham still recommends Lisp and present-day would use Clojure, so the secret weapon would be Datomic Cloud for serverless Lisp.


The community is to blame for this.

If "serverless heros" are running around promoting Lambda, newcomers will use it without thinking twice...


In tech you either die a hero or live long enough to become the villain.


You forget C++. It's a great choice for Lambda due to startup times. Python's startup time is actually terrible and should be avoided if the call rate is really high. Also, a Lambda instance is reusable: after spinning up, it will be used to handle multiple requests (if they are coming in often enough).


I measured startup of the runtimes a long time ago, and back in the days of Node.js 0.10.x at least, Python 2's startup time was twice as fast as Node.js's, and Java's wasn't much worse than Node.js's. I don't know how .NET fares compared to Java, but I imagine it's about the same.

Furthermore, people like to compare runtime startup times, but this tells a very small portion of the story. For most applications, the dominant startup cost isn't the startup of the runtime itself, but the cost of loading the app code into the runtime. Your Node.js runtime has to load, parse, compile, and execute every single line of code used in your app, for instance, including all third-party dependencies.

Compare, for instance, the startup cost of a "hello world" Node.js function with one that includes the AWS SDK. At least six years ago, the Node.js AWS SDK wasn't optimized at all for startup, and it caused a huge (10x?) spike in startup time because it loaded the entire library.

I would argue that the only languages that are a really good fit for Lambda are ones that compile to native code, like Go, Rust, and C/C++. The cost to load code for these applications is a single mmap() call by the OS per binary and shared library, followed by the time to actually load the bytes from disk. It doesn't get much faster than that.

Once you've switched to native code, your next problem is that Lambda has to download your code zip file as part of startup. I don't know how good Lambda has gotten at speeding that part up.


On "7) Always factor in labor savings, especially devops":

DevOps is not a synonym for "ops" or "sysadmin". It's not a position. DevOps is like Agile or Lean: it's a general method with lots of different pieces you use to improve the process of developing and supporting products among the different members of multiple teams. DevOps helps you save money, not the reverse.

You don't even need an "ops person" to do DevOps.


The Rust runtime has a fast start time as well, FWIW.


Because Rust doesn't have runtime initialization.


Rust AWS Lambda Runtime author here: while the Rust runtime tends to beat all other runtimes, Go is _very_ close in terms of startup times.



Yep! Thank you so much! I really need to update the documentation, but I think `cross` (https://github.com/rust-embedded/cross), a drop-in cargo replacement, is probably the best available solution for building musl-based binaries.


I've struggled with this before, will have to take a look at it.


GraphQL makes caching a real bitch.


It might, but for some APIs caching doesn't even make sense.


I haven't thought about point 3 before, but it makes sense. Maybe I should show this to the guy who used Google Cloud Functions to upload images in our previous project :)

I guess the reasoning would be that this way the actual time spent in serverless code is shorter and by proxy the service becomes cheaper?


Saves time and money by writing and executing less code + S3 is optimized for this task, so it will always perform better than an ad hoc serverless function.


Number 3, thinking in events instead of REST actions, can't be stressed enough. Of course, some things must be actions (another word for that is commands), and in those situations you need something that will turn a command into an event. This is one of the features of CloudState (https://cloudstate.io), which offers serverless event sourcing: you handle commands and output events that can be further processed downstream by other functions.


As general rules these sound great at first sight, but they don't really address the main culprit from TFA: like for like, API Gateway costs a lot more to process n requests.


Well, given the feature set of API Gateway compared to a Load Balancer I think it should be expected that it costs more. But that’s also beside the point which is to use managed services to do the heavy lifting. Eg. if you need a PubSub service for IoT, that shouldn’t go through API Gateway and Lambda, there is a specific AWS service for that.


RE: #3. This still requires a Lambda to pre-sign the URL. No?

Granted, this approach is much lighter than uploading an image directly.


If you use Cognito for identity management, then there isn’t even need for that. You can just assign users the appropriate IAM role and you can upload directly from the front end.


> Below is a report for one request, you can see we're using 3.50ms of compute time and being billed for 100ms, which seems like a big waste.

Doesn't sound like your point number 1 is valid at all, quite the opposite.


> The only valid options for performance sensitive functions are JS, Python and Go.

I can think of a number of other languages that would probably easily surpass these, especially on latency.


> 4. Use GraphQL to pool API requests from the front end.

What does this look like in practice? Doesn't this increase response time for the initial requester?


These are usually the read-N-items-from-a-database type of queries that GraphQL makes trivial to batch together. It will barely increase response time, but will provide a better experience for users on bad connections.
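As a rough illustration of what such pooling looks like, here is one hypothetical GraphQL document that replaces three separate REST round trips (all field names invented):

```python
# One GraphQL document replaces three REST calls such as
# GET /me, GET /notifications, GET /feed (field names hypothetical).
POOLED_QUERY = """
{
  me { id name }
  notifications(first: 10) { id text }
  feed(first: 20) { id title }
}
"""

def request_body():
    # The frontend POSTs this single body instead of three requests.
    return {"query": POOLED_QUERY}
```

One round trip instead of three is what helps most on high-latency mobile connections.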


> 1. Don’t use .NET, it has terrible startup time. Lambda is all about zero-cost horizontal scaling, but that doesn’t work if your runtime takes 100 ms+ to initialize. The only valid options for performance sensitive functions are JS, Python and Go.

I always sorta assumed that Amazon pre-initialized runtimes and processes and started the lambdas from essentially a core dump. Is there some reason they don't do this, aside from laziness and a desire to bill you for the time spent starting a JVM or CLR? Does anyone else do this?



