Event-based Computing (AWS Lambda)

September 05, 2017

What is Event-based Computing?

 “An event-driven computing service for dynamic applications,” which essentially allows event-based communication between your app and the cloud without depending on a server to handle the heavy lifting.

 An event-based architecture (EBA) is a structure built from components that interact predominantly through event notifications rather than direct method calls. An "event notification" is a signal that carries information about an event detected by the sender. While you're probably familiar with events like button clicks, events can be defined as almost any technologically detectable condition or occurrence. Notifications can be used to carry any type of domain-specific information in any type of system—embedded, GUI-based, distributed, or other.

What is AWS Lambda?

AWS Lambda is a compute service in the AWS cloud that allows you to run code without provisioning or managing servers. You pay only for the compute time you consume: you are charged for every 100ms your code executes and for the number of times your code is triggered, and there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale it with high availability. You can set up your code to be triggered automatically by other AWS services or call it directly from any web or mobile app.

AWS Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or to create back-end services that operate at AWS scale, performance, and security. AWS Lambda can automatically run code in response to multiple events, such as modifications to objects in Amazon S3 buckets or table updates in Amazon DynamoDB.

Lambda runs your code on highly available compute infrastructure and performs all the administration of the compute resources. This includes server and operating system maintenance, capacity provisioning, automatic scaling, code and security patch deployment, and code monitoring and logging. All you need to do is supply the code.

After you upload your code to AWS Lambda, you can associate your function with specific AWS resources (e.g. a particular Amazon S3 bucket, Amazon DynamoDB table, Amazon Kinesis stream, or Amazon SNS notification). Then, when the resource changes, Lambda will execute your function and manage the compute resources as needed to keep up with incoming requests.

Pricing Details

You are charged for the total number of requests across all your functions. Lambda counts a request each time it starts executing in response to an event notification or an invoke call, including test invocations from the console.

  • First 1 million requests per month are free
  • $0.20 per 1 million requests thereafter ($0.0000002 per request)

Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100ms. The price depends on the amount of memory you allocate to your function. You are charged $0.00001667 for every GB-second used.
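As a hedged sketch, the pricing above can be turned into simple arithmetic. The request volume, average duration, and memory size below are invented, and the free tier is ignored:

```javascript
// Rough Lambda cost estimate from the prices quoted above.
// Ignores the free tier; all input figures are illustrative.
const PRICE_PER_GB_SECOND = 0.00001667;
const PRICE_PER_REQUEST = 0.0000002; // $0.20 per 1M requests

function monthlyCost(requests, avgDurationMs, memoryMb) {
  // Duration is billed rounded up to the nearest 100ms.
  const billedMs = Math.ceil(avgDurationMs / 100) * 100;
  const gbSeconds = requests * (billedMs / 1000) * (memoryMb / 1024);
  return requests * PRICE_PER_REQUEST + gbSeconds * PRICE_PER_GB_SECOND;
}

// 3 million requests, 120ms average (billed as 200ms), 512MB memory:
console.log(monthlyCost(3e6, 120, 512).toFixed(2)); // prints 5.60
```

The example shows why memory allocation matters: doubling the memory of the same function doubles the GB-seconds, and therefore the duration charge.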

Free Tier

The Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month. The memory size you choose for your Lambda functions determines how long they can run within the free tier. The Lambda free tier does not expire at the end of your 12-month AWS Free Tier term; it is available to both existing and new AWS customers indefinitely.

Additional Charges

You may incur additional charges if your Lambda function utilizes other AWS services or transfers data. For example, if your Lambda function reads and writes data to or from Amazon S3, you will be billed for the read/write requests and the data stored in Amazon S3.

Events that can trigger a Lambda function
  • Table updates in Amazon DynamoDB.
  • Modifications to objects in S3 buckets.
  • Notifications sent from Amazon SNS.
  • Messages arriving in an Amazon Kinesis stream.
  • AWS API call logs created by AWS CloudTrail.
  • Client data synchronization events in Amazon Cognito.
  • Custom events from mobile applications, web applications, or other web services.
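As an example of the first S3 case, an S3 notification arrives at your function as a JSON document with a Records array. A handler might pull out the bucket and key like this; the record shape follows the S3 event notification format, while the bucket and key values are invented:

```javascript
// Extract bucket/key pairs from an S3 event notification.
// Record fields follow the S3 event format; the sample values are made up.
function extractObjects(s3Event) {
  return s3Event.Records.map(function (record) {
    return {
      bucket: record.s3.bucket.name,
      key: record.s3.object.key,
      eventName: record.eventName
    };
  });
}

const sampleEvent = {
  Records: [{
    eventName: 'ObjectCreated:Put',
    s3: { bucket: { name: 'my-photos' }, object: { key: 'uploads/cat.jpg' } }
  }]
};

console.log(extractObjects(sampleEvent));
```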

AWS Lambda Best Practices

The following are recommended best practices for using AWS Lambda:
  • Write your Lambda function code in a stateless style and ensure there is no affinity between your code and the underlying computation infrastructure.
  • Avoid declaring any function variables outside the scope of the handler. Lambda does not guarantee those variables will be refreshed between function invocations.
  • Make sure you have set +rx permissions on the files in your uploaded ZIP so that Lambda can execute the code on your behalf.
  • Lower costs and improve performance by minimizing the use of startup code not directly related to processing the current event.
  • Use the built-in CloudWatch monitoring of your Lambda functions to view and optimize request latencies.
  • Delete old Lambda functions that you are no longer using.

AWS Lambda Scheduled Events

You can schedule a Lambda function much as you would a cron job, and the syntax is the same: you use cron expressions to configure the schedule in CloudWatch Events rules.

cron(Minutes Hours Day-of-month Month Day-of-week Year)

Cron expression

Invoke Lambda function at 10:00am (UTC) every day

cron(0 10 * * ? *)


In other scenarios you may prefer rate() to cron(). With rate you provide a positive integer value and a unit (minutes, hours, or days).

Rate expression

Invoke Lambda function every 5 minutes

rate(5 minutes)

Invoke Lambda function every hour

rate(1 hour)
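One detail worth noting in the two examples above is that rate() requires the singular unit when the value is 1 (rate(1 hour)) and the plural unit otherwise (rate(5 minutes)). A small helper makes the rule explicit; the helper itself is hypothetical, not part of any AWS SDK:

```javascript
// Hypothetical helper that formats CloudWatch Events rate() expressions,
// applying the singular/plural unit rule.
function rateExpression(value, unit) {
  if (!Number.isInteger(value) || value < 1) {
    throw new Error('rate() requires a positive integer value');
  }
  const singular = unit.replace(/s$/, ''); // accept "hour" or "hours"
  const word = value === 1 ? singular : singular + 's';
  return 'rate(' + value + ' ' + word + ')';
}

console.log(rateExpression(5, 'minutes')); // rate(5 minutes)
console.log(rateExpression(1, 'hours'));   // rate(1 hour)
```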


Lambda and Errors

We have to keep in mind that there are different options for sending a response from a synchronous Lambda execution.

There are two possible outcomes: we can either succeed(result) or fail(error). Succeed signals a successful execution and fail an erroneous one. There is a third alternative named done(error, result), but it does not change the behaviour of succeed and fail; it merely wraps the two functions. Calling done with a non-null error value surfaces the error in the response just as fail would, and the result value is ignored.
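The wrapping behaviour of done can be sketched with a mock context standing in for the one Lambda provides. The mock and its record object are invented for illustration:

```javascript
// Mock of the Node.js-era Lambda context, showing how done(error, result)
// merely dispatches to fail or succeed. The record object is illustrative.
function makeContext(record) {
  const context = {
    succeed: function (result) { record.outcome = { ok: true, result: result }; },
    fail: function (error) { record.outcome = { ok: false, error: error }; }
  };
  context.done = function (error, result) {
    // A non-null error behaves like fail(error); the result is then ignored.
    if (error) { context.fail(error); } else { context.succeed(result); }
  };
  return context;
}

const rec = {};
makeContext(rec).done(null, 'all good');
console.log(rec.outcome); // { ok: true, result: 'all good' }

makeContext(rec).done('boom', 'ignored');
console.log(rec.outcome); // { ok: false, error: 'boom' }
```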

When exposing Lambda errors through API Gateway, keep the following in mind:

  • The Lambda function error response is wrapped and stringified. To work around this, we stringify our error before failing the Lambda function.
  • API Gateway will only evaluate our Integration Response error regexp if the Lambda function fails.
  • Map the response to an HTTP status code using a well-defined string in the error, preferably something encapsulated to avoid false positives.
  • Have API Gateway pass the output from the Lambda function straight through to the client. On the client side we do JSON.parse(error.errorMessage) to get the error back.

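The stringify work-around can be sketched end to end; failWithJson, parseLambdaError, and the mock context below are hypothetical names invented for this example:

```javascript
// Server side: wrap structured error data in a JSON string before failing,
// so it survives as the errorMessage field of Lambda's error response.
function failWithJson(context, statusCode, message) {
  context.fail(JSON.stringify({ statusCode: statusCode, message: message }));
}

// Client side: recover the structured error from errorMessage.
function parseLambdaError(errorResponse) {
  return JSON.parse(errorResponse.errorMessage);
}

// Simulated round trip with a mock context in place of Lambda's:
let captured;
failWithJson({ fail: function (msg) { captured = { errorMessage: msg }; } },
             404, 'order not found');
console.log(parseLambdaError(captured)); // { statusCode: 404, message: 'order not found' }
```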
Exposing Lambda Functions as Web Services

Using API Gateway (an AWS service), you can provide web access to your Lambda functions. The AWS API Gateway is the only way to expose your Lambda function over HTTP. The AWS Lambda web console will create one automatically for you if you use the microservice-http-endpoint blueprint when creating a new Lambda function.

Amazon provides this service to let users create, publish, monitor, maintain, and secure their APIs. It acts as a gateway for end users to access your application's business logic. If you have an existing public API, or are planning to make your application public, you might consider deploying it on API Gateway to achieve better performance, scalability, and availability with low maintenance and cost. If you are planning to write the application code for your business logic, you may also like to use Lambda functions. Besides freeing you from server maintenance, this lets you expose your existing Lambda functions as APIs through API Gateway. Deploying an API doesn't cost anything; you only pay based on the number of requests your API receives and the amount of data it sends back.

The Serverless Future

Things to consider with this architecture.

Microservices are a way of breaking large software projects into loosely coupled modules that communicate with each other through simple APIs. Microservices seem simple to build, but there is more to creating them than just launching code in containers and making HTTP requests between them. They bring clear benefits:

  • Microservices do not require teams to rewrite the whole application if they want to add new features.
  • Smaller codebases make maintenance easier and faster. This saves a lot of development effort and time, therefore increasing overall productivity.
  • The parts of an application can be scaled separately and are easier to deploy.

If you’re not shipping or iterating very quickly, you probably shouldn’t be doing microservices, because you’re not getting their true benefits. To maximize the effectiveness of microservices, you need a continuous delivery workflow. This workflow must, at the least, be defined, and preferably automated; automation becomes a requirement as the volume of microservices increases. A contentious question in this field is where to test the application. We’re going to talk about testing later, but you do need to have the microservice running in production in order to fully test it. The questions you should answer for any new microservice fall into three categories: organizational concerns, architectural concerns, and developmental concerns.

  • How will your new service be deployed and upgraded?
  • What is going to be the QA strategy?
  • How will the settings/configuration of the services be handled?
  • How will it be secured?
  • How will it be discovered?
  • How will it scale with increasing load?
  • How will it handle failures of its dependencies?
  • How will the rest of the system handle the failure of the new microservice?
  • How will it be upgraded?
  • How will it be monitored and measured?

While it might not be necessary to have very sophisticated answers to each of these questions, it is important to consider each one and be aware of any structural limitations your microservice may have. For example, your new microservice might first be deployed without any disaster recovery or region failure tolerance and then upgraded later to include that kind of resilience. Being aware of what your microservice both can and cannot currently do is crucial, and knowing the answer to each of these questions will help you continue to tweak and improve it until it evolves into a mature, resilient, and reliable system component.

Every piece of technology has a downside. If we consider microservices at an organizational level, the negative trade-off is clearly the increase in operational complexity. No human can fully map how all of the services talk to each other, so companies need tools that provide visibility into their microservice infrastructure.

Other Event-based Computing

One of the other cloud platforms offering this style of computing is Microsoft Azure.

Microsoft Azure Service Fabric is a microservices platform that supports microservices that can be either stateless or stateful. Stateless microservices do not maintain any mutable state outside of a request and its response from the service. Stateful microservices maintain a mutable, authoritative state beyond the request and its response.

There are two reasons why stateful microservices are important:

1) The ability to build high-throughput, low-latency, failure-tolerant OLTP services such as interactive storefronts, search, Internet of Things (IoT) systems, trading systems, credit card processing and fraud detection systems, and personal record management, by keeping code and data close to each other on the same machine.

2) Simplified application design: stateful microservices remove the need for the additional queues and caches that have traditionally been required to meet the availability and latency requirements of a purely stateless application. Since stateful services are naturally highly available and low-latency, this means fewer moving parts to manage in your application as a whole.

Service Fabric enables you to build and manage scalable and reliable applications composed of microservices running at very high density on a shared pool of machines (commonly referred to as a Service Fabric cluster). It provides a sophisticated runtime for building distributed, scalable, stateless and stateful microservices, plus comprehensive application management capabilities for provisioning, deploying, monitoring, upgrading/patching, and deleting deployed applications.

Service Fabric powers many Microsoft services today, such as Azure SQL Database, Azure DocumentDB, Cortana, Power BI, Microsoft Intune, Azure Event Hubs, many core Azure services, and Skype for Business, to name a few. It also allows you to start as small as needed and grow to massive scale with hundreds or thousands of machines, creating Service Fabric clusters across availability sets in a region or across regions.

Topics: Development
