How to win cloud and influence people
Over the last year at Hamilton Robson, a new business opportunity gave us the chance to work with innovative tech to build a recruitment platform.
We identified several different user flows during our service design sessions and in this blog, I want to cover two of these scenarios and the different AWS services used to implement them.
My aim is to break everything down into simple, relatable examples and to include the business impact, so don't worry if this is your first time hearing about some of these services or this terminology. If you're already an expert in them, this might give you some ideas for breaking down technical communication barriers in your team by using real-world examples to explain system architecture.
Scenario 1 — Automated User Verification
When a user signs up, a range of different verification checks needs to happen before they can fully access the system.
These include verifying photo ID, right-to-work documents and qualifications, and then running continuity checks across all documents. Each check has its own business rules, escalation paths and asynchronous integrations, e.g. machine learning, third-party APIs and human fallbacks.
Scenario 2 — Timesheet Generation
When a user accepts a job on the platform, they will log their start and end time daily within the application.
We then need to generate timesheet reports from this data on either a weekly or monthly basis depending on the job.
We opted for implementing a serverless approach for both scenarios — so let’s start off with: what is serverless?
What is Serverless?
If you wanted to build a web application in the early days of the web, you had to own the physical hardware required to run a server.
Then came the ability to rent a fixed number of servers remotely. However, companies would typically over-purchase to ensure that an increase in user activity would not exceed their monthly limits and break applications.
Serverless computing is an execution model for cloud computing in which server provisioning and dynamic resource allocation are managed by a cloud provider such as AWS, Azure or GCP.
Developers don’t need to manage these servers, and application owners only pay for the execution time and resources being consumed.
Let’s try to think of this in a real-world example:
If you own your own car, you must maintain it by doing things like filling it with fuel, changing the oil or replacing the brakes.
Services like Uber/Lyft eliminate these needs, saving you time, money and effort on vehicle maintenance while letting you focus on what is important: getting from point A to point B.
Serverless works in the same way — freeing up the maintenance time and effort, allowing you to focus on delivering a better customer experience.
Lambda is a core component of any AWS serverless architecture. It is a compute service that lets you run code for virtually any type of application or backend service.
It performs all the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging.
When we look at our first scenario of automated user verification, we can see there are several different checks that we need to perform. Photo ID verification, right to work documents, qualifications and then also doing continuity checks across documents.
Since Lambda is best suited to shorter, event-driven workloads, and since these steps are all isolated pieces of business functionality, we can separate them out into their own Lambda functions that work independently of each other.
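As a sketch, one of those isolated checks might look like the function below. The event shape, the field names and the two business rules are hypothetical illustrations rather than our production logic; only the handler signature follows the standard Lambda pattern.

```python
# A minimal sketch of one isolated verification check as its own Lambda
# function. The event shape and the rules below are invented examples.

from datetime import date, datetime


def photo_id_handler(event, context=None):
    """Lambda entry point for the photo ID check only.

    Because each check lives in its own function, this one knows
    nothing about right-to-work or qualification checks.
    """
    document = event["document"]

    # Illustrative rule 1: the ID must not be expired.
    expiry = datetime.strptime(document["expiry_date"], "%Y-%m-%d").date()
    if expiry < date.today():
        return {"check": "photo_id", "passed": False, "reason": "expired"}

    # Illustrative rule 2: the name on the ID must match the applicant.
    if document["name"].lower() != event["applicant_name"].lower():
        return {"check": "photo_id", "passed": False, "reason": "name_mismatch"}

    return {"check": "photo_id", "passed": True}
```

The right-to-work, qualification and continuity checks would each get their own equally small handler, so a change to one rule never touches the others.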
However, AWS Lambda alone cannot implement the entire flow of a serverless architecture. It needs several other components.
We know we can use Lambdas to run our custom checks on our user documents, but with the number of checks we are running, we need to consider how long this will take. Do we want users sitting and waiting for each stage of the verification checks to take place? Probably not!
This is when we need to start thinking about being asynchronous and working in queues.
When two or more events are described as “asynchronous”, it means they do not happen at the same time.
For instance, if you were a waiter in a restaurant, once you had entered an order on behalf of a customer, you wouldn't stop everything and wait around for the chef to make the food.
Instead, you would add the order to a queue for the chef to process the “food-making” task, and go back to your other tasks.
Queues have been a core part of software architecture for years, and with the trend towards microservices they have become more important than ever.
SQS is AWS’ fully managed message queuing service that enables us to decouple our serverless applications.
Just think of queues as the physical queues you would see in real life.
Let’s bring this back to our scenario 1 use case again:
A user uploads all the documents they need verified, and the documents wait in the queue to be processed by our different verification services. The user does not need to wait for a response; they can carry on with anything else while the verification tasks run in the background.
This decouples the user from the verification process, so that there is no direct dependency between them.
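This decoupled flow can be sketched with an in-memory queue standing in for SQS. The function names and message shape here are invented for illustration; in the real platform the producer would send messages to an SQS queue and the consumer would be a Lambda triggered by it.

```python
# An in-memory stand-in for SQS, purely to illustrate the decoupling.
# In production this would be an SQS queue with a Lambda consumer.

import queue

verification_queue = queue.Queue()


def upload_documents(user_id, documents):
    """Producer: the user uploads and immediately moves on."""
    for doc in documents:
        verification_queue.put({"user_id": user_id, "document": doc})
    # The user gets an instant acknowledgement; no check has run yet.
    return {"status": "received", "queued": len(documents)}


def run_verification_worker():
    """Consumer: drains the queue in the background, one message at a time."""
    results = []
    while not verification_queue.empty():
        message = verification_queue.get()
        # Each message would be routed to the matching check Lambda here.
        results.append({"user_id": message["user_id"],
                        "document": message["document"],
                        "verified": True})
        verification_queue.task_done()
    return results
```

Notice that `upload_documents` returns before any verification runs: the producer and consumer only share the queue, never a direct call.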
We know we want to use queues for our verification flow — but maybe this isn’t the best tool to use for our timesheet generator.
Remember, for scenario 2 we want to generate our timesheets on a particular time basis e.g. either weekly or monthly depending on the job a worker is in.
For this we can look at either CloudWatch Events or EventBridge.
CloudWatch Events can be used to schedule automated actions that self-trigger at certain times.
EventBridge is also an AWS service that enables you to respond to state changes in different AWS resources.
CloudWatch Events and EventBridge are the same underlying service and API, but we went with EventBridge as it provides some additional features.
This use case is quite simple and has very few components.
We have the schedule configured within EventBridge, and at each scheduled time it runs the code inside our Lambda function to generate the worker timesheets. The schedule is very easy to update, and we can reconfigure it at any time to run on a different cadence.
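As a rough sketch, mapping a job's frequency to its schedule could look like this. The cron strings use real EventBridge six-field syntax (minutes, hours, day-of-month, month, day-of-week, year), but the specific times and the `schedule_for` helper are illustrative choices of our own.

```python
# A sketch of mapping a job's timesheet frequency to an EventBridge
# schedule expression. Field order: minutes hours day-of-month month
# day-of-week year; exactly one of day-of-month/day-of-week must be "?".

def schedule_for(frequency):
    """Return an EventBridge schedule expression for a timesheet job."""
    if frequency == "weekly":
        # Every Monday at 06:00 UTC.
        return "cron(0 6 ? * MON *)"
    if frequency == "monthly":
        # 06:00 UTC on the first day of each month.
        return "cron(0 6 1 * ? *)"
    raise ValueError(f"unknown frequency: {frequency}")
```

The resulting expression would be attached to an EventBridge rule whose target is the timesheet-generating Lambda, so changing a job's cadence is just a one-line rule update.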
To help you make the case to your team for why breaking your services into smaller pieces of functionality is so valuable, I'm going to cover the following benefits:
- Automatic scaling
- Quicker Iterative Development
- Independent Specialised Teams
- Business Flexibility
Automatic Scaling
Since Lambda manages the infrastructure that runs our code, it scales automatically in response to spikes in the number of people using the system at the same time.
For some more context on why this is important, let's look at an example:
Ring doorbell cameras pick up anyone who approaches your door. That seems like something with fairly consistent usage across all houses globally, right?
But then came Halloween, when people across the world were trick-or-treating, causing a huge spike in usage!
I'm sure the Ring engineering team didn't expect Halloween to be such a big night for doorbells, and that's why it is so important that a service scales well: to cope with the scenarios and events your team has not considered.
For our recruitment platform, we may see large increases in customers using our service after we push out new marketing campaigns or forge a client partnership and that’s why we always need to make sure our infrastructure is ready!
Another benefit of having our functionality split out is that we can isolate our scaling. If at some point we have more timesheets to process than usual, we can scale up our timesheet service while our user verification service stays unaffected.
Ultimately, this can help reduce operational costs!
Quicker Iterative Development
With Lambda at our core and the infrastructure required for a Lambda being minimal, it lends itself nicely to proof-of-concept development. You can quickly get something up and running to test a theory, and then continually extend it for production use without having to rewrite it or create a new service.
Things are also always changing: new languages and frameworks keep popping up, and for good reason, as software naturally needs updating for security and performance gains.
Having these smaller pieces of functionality, all independent from each other, makes updates and enhancements easier and allows you and the team to adapt to your surroundings, keeping you competitive and always on the latest and greatest technology.
When it comes to communicating the benefit of quicker iterations, this can be about much more than just changes in technology.
It can also mean business flexibility.
With agile, customer-led design, feedback often leads to changing requirements, so our technology choices have never been more important for updating faster, reducing operational impact and staying competitive.
An example of this can be explained with pricing.
When technology evolves alongside a business idea, the pricing model is often embedded deeply into the system. However, as an offering matures, the way pricing is calculated will need to change.
With serverless, because business functionality is naturally split out, a pivot in the metrics that drive pricing is as simple as running a report on our service logs.
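As a hypothetical illustration of that report: if each service emits one log record per invocation, a candidate pricing metric is just an aggregation over those records. The record shape and service names below are invented for this example.

```python
# A sketch of deriving a usage-based pricing metric from service logs.
# Because each piece of functionality is its own service, its invocation
# count falls straight out of its logs. Record shape is illustrative.

from collections import Counter


def usage_report(log_records):
    """Count invocations per service: each count is a billable-unit candidate."""
    return Counter(record["service"] for record in log_records)
```

Switching the pricing model from, say, per-verification to per-timesheet then means reporting on a different service's counter, not rewriting the system.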
Independent Specialised Teams
When every team is contributing to the same application, one breakage can block other teams from getting work done. This can lead to less experimentation and innovation, because the software is seen as more fragile. Processes may also take longer, as you may need cross-team approvals.
But when we create smaller, more isolated services, teams can become more specialised in their own individual areas and take full ownership of those services. This can also speed up processes such as deployments and approvals.
Having smaller services can also allow for more experimentation and creativity because developers are less afraid of their changes affecting other teams.
Having smaller services and smaller teams also gives you more flexibility when it comes to talent management/hiring.
When you have one large service written in one particular language/framework, you can get locked into hiring for that specific technology.
When working with these smaller services, each service can be written in a different language and they will still work together. So if the available talent pool works in a different language, you are in a position to be flexible with the new services you create.
This can allow you to future proof your talent strategy, enabling you to hire the right people based purely on talent and not specific technology.
I know I’ve only mentioned 4 services throughout this blog, but there are lots more services for different use cases!
If you want to make sure you are using the right tools for your needs, be sure to check out the AWS Well-Architected Tool. It is based on a framework with five pillars:
- Operational Excellence
- Security
- Reliability
- Performance Efficiency
- Cost Optimisation
All of which can help you meet your governance needs within your organisation. The framework allows you to define your workload, conduct architectural reviews and then you can apply the best practices!