In this post, I will share the lessons I have learned over the past year using Serverless to build mobile and web apps for a technical consultancy in Sydney. For each shortcoming, I will also recommend one or more solutions.

1. FaaS – Connection Pooling Limit
This limitation is often not mentioned in FaaS conversations. Cloud providers tout FaaS as a solution that can scale infinitely. While this may apply to the function itself, most of the resources your function relies on will not be infinitely scalable.

The number of concurrent connections your relational database supports is one of those limited resources. What makes this problem such a big deal is FaaS's unfriendliness towards connection pooling.

As I mentioned earlier, each instance of your function lives in its own isolated, stateless environment. This means that when it connects to a relational database (e.g. PostgreSQL, MySQL, Oracle), it will typically maintain its own connection pool to avoid opening and closing a connection on every request.

Your relational database can only handle a certain number of concurrent connections (the default is often around 20). Spawning more than 20 instances of your function will quickly exhaust your database's connection limit, preventing other systems from accessing it.
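
To make the problem concrete, here is a minimal sketch, assuming a Node.js Lambda using the pg package and a hypothetical DATABASE_URL environment variable (both my assumptions, not from the original setup). The pool is created at module scope, so every concurrent instance of the function holds its own pool, and the total number of open connections grows with the instance count:

```typescript
import { Pool } from 'pg';

// Created once per function instance. With 100 concurrent instances, the
// database sees up to 100 * max connections, regardless of what you set here.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL, // hypothetical env variable
  max: 1, // keep the per-instance pool as small as possible
});

export const handler = async () => {
  const { rows } = await pool.query('SELECT now() AS current_time');
  return { statusCode: 200, body: JSON.stringify(rows[0]) };
};
```

Capping max helps, but it only delays the problem: the provider, not you, decides how many instances exist at any given moment.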

For this reason, I recommend avoiding any FaaS if your function needs to communicate with a relational DB using a connection pool.

2. FaaS – No WebSockets Support
This one is kind of obvious, but for those hoping to have their cake and eat it too: you can't expect to maintain WebSocket connections on a system that is designed to be ephemeral. If you are looking for serverless WebSockets, you will need to use a BaaS like Zeit Now instead.
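
To see why, here is a minimal sketch of the kind of long-lived, stateful process a WebSocket server needs, assuming Node.js with the ws package (my example, not part of any FaaS offering). Every connected client lives in this process's memory, which only works if the process itself sticks around:

```typescript
import WebSocket, { WebSocketServer } from 'ws';

const wss = new WebSocketServer({ port: 8080 });
const clients = new Set<WebSocket>();

wss.on('connection', (socket) => {
  clients.add(socket); // connection state lives in this process's memory

  socket.on('message', (data) => {
    // broadcast the message to every connected client
    for (const client of clients) {
      if (client.readyState === WebSocket.OPEN) client.send(data.toString());
    }
  });

  socket.on('close', () => clients.delete(socket));
});
```

On a FaaS, that process can be recycled at any moment, dropping every open connection with it.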

Alternatively, if you're trying to build a serverless GraphQL API, it's possible to use Subscriptions (which rely on WebSockets) through AWS AppSync. A great article that explains this use case in more detail is Running a Scalable and Reliable GraphQL Endpoint with Serverless.
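
As a hedged sketch of what that looks like from the client side, assuming the aws-appsync and graphql-tag packages, a hypothetical onCreateReport subscription field, and made-up endpoint/API-key environment variables:

```typescript
import AWSAppSyncClient, { AUTH_TYPE } from 'aws-appsync';
import gql from 'graphql-tag';

// AppSync manages the underlying WebSocket for you, so you don't need to run
// a long-lived server of your own.
const client = new AWSAppSyncClient({
  url: process.env.APPSYNC_URL!,     // hypothetical AppSync endpoint
  region: 'ap-southeast-2',          // assumed region for this example
  auth: { type: AUTH_TYPE.API_KEY, apiKey: process.env.APPSYNC_API_KEY! },
  disableOffline: true,
});

client
  .subscribe({
    query: gql`
      subscription OnCreateReport {
        onCreateReport {
          # hypothetical field, for illustration only
          id
          title
        }
      }
    `,
  })
  .subscribe({
    next: ({ data }) => console.log('new report', data),
    error: (err) => console.error(err),
  });
```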

3. FaaS – Cold Start
FaaS solutions such as AWS Lambda have shown enormous benefits when solving Map-Reduce challenges (for example, leveraging AWS Lambda for image compression at scale). However, if you're trying to provide fast responses to events like HTTP requests, you'll need to take into account the time required by the function to warm up.

Your function resides inside a virtual environment that is spawned and destroyed based on the traffic it receives (something you don't directly control). This spawning process takes a few seconds, and once your function has been idle for a while due to low traffic, its environment is torn down and has to be re-spawned on the next request: that delay is the cold start.
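
A quick way to see this for yourself is sketched below, assuming an AWS Lambda on the Node.js runtime (my illustration). Code at module scope runs once per new instance, so a module-level flag survives warm invocations but resets on every cold start:

```typescript
// Runs once per new instance of the function, i.e. on every cold start.
let warm = false;

export const handler = async () => {
  const coldStart = !warm; // true only for the first invocation of this instance
  warm = true;
  return {
    statusCode: 200,
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ coldStart }),
  };
};
```

Logging that flag alongside response times makes the warm-up penalty easy to spot.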

I learned this at my own expense when deploying a relatively complex reporting REST API on Google Cloud Functions. That API was part of a microservices refactoring effort to break down our larger monolithic web API. I started with a low-traffic endpoint, which meant the function was often idle, so the reports powered by that microservice were slow on first access.

To fix that problem, I moved my microservice from Google Cloud Functions (FaaS) to Zeit Now (BaaS). That migration allowed me to keep at least one instance running at all times (more about Zeit Now in my next post: Why We Love Zeit Now and When to Use It Over FaaS).

4. FaaS – No Long-Running Processes
AWS Lambda and Google Cloud Functions executions cannot last more than 5 and 9 minutes respectively. If your business logic involves long-running tasks, you will need to move to a BaaS like Zeit Now instead.
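
If you have to stay on FaaS, one common workaround (my illustration, not something from the original setup) is to checkpoint: watch the remaining execution time via the Lambda context and stop cleanly so a follow-up invocation can resume. A minimal sketch, with a hypothetical event shape and processItem helper:

```typescript
// Processes as many items as time allows, then returns the leftovers so the
// caller (or a Step Functions state machine) can invoke the function again.
export const handler = async (
  event: { items: string[] },
  context: { getRemainingTimeInMillis(): number },
) => {
  const processed: string[] = [];
  for (const item of event.items) {
    if (context.getRemainingTimeInMillis() < 10_000) break; // keep a safety margin
    await processItem(item); // stand-in for the real long-running work
    processed.push(item);
  }
  return { processed, remaining: event.items.slice(processed.length) };
};

async function processItem(item: string): Promise<void> {
  // placeholder: simulate a unit of work
  console.log('processing', item);
  await new Promise((resolve) => setTimeout(resolve, 100));
}
```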

For more information about FaaS limits, please see AWS Lambda Quotas and Google Cloud Functions Quotas.

5. BaaS and FaaS – Losing Infrastructure Control
If your product requires some degree of control over your infrastructure, then serverless will most likely let you down.

One example is controlling fine-grained geo-replication of your app or data to ensure consistent, fast performance globally (there are ways around this in some scenarios; see Build a serverless multi-region, active-active backend solution in an hour).

Serverless may fall short in use cases like these. However, as I discussed earlier, serverless is just an extension of PaaS. Leveraging the latest PaaS containerization strategies, such as Google Kubernetes Engine, can get you much closer to what serverless offers without having to worry too much about the scalability and reliability of the underlying infrastructure.

6. BaaS and FaaS – Compliance and Security
This one shares all the common concerns associated with the cloud in general: you are handing control of your infrastructure over to one or more third parties.
