Monday morning thoughts: functions – what functions?
In this post, I think about the concept of functions-as-a-service, what it means, and what it’s good for.
Serverless, cloud functions, functions-as-a-service. Three terms that we’re hearing more and more about these days, particularly in the context of cloud native. One could almost think that they’re a product of the cloud: the concept would probably not have come about without the cloud as an enabling platform.
While cloud functions and functions-as-a-service are pretty much interchangeable as terms (and I’ll use functions-as-a-service, or FaaS, from now on), some folks like to maintain a subtle distinction between serverless and FaaS. That’s fine, and they have valid reasons. With serverless, it’s not that there aren’t any servers – of course there are – it’s that we don’t have to care about them. It’s similar to the concepts lower down the stack; with infrastructure-as-a-service, we don’t have to care about the physical hardware upon which our virtual machines (VMs) run.
Cloud layer granularities
The difference with FaaS is that the granularity is finer. As you move up the stack, from infrastructure, to platform, to software, to backend, and ultimately to functions, any idea of servers at all, physical or logical, disappears. We move up from VMs that we remain responsible for (infrastructure-as-a-service), through runtimes that we have to be mindful of (platform-as-a-service), through a necessarily complex and stateful platform whose intricacies we must understand (software-as-a-service), to what is perhaps the ultimate – the platform that we thus far have had to think about has faded away, almost in a Matrix kind of way: “there is no platform”.
What we must think about at the FaaS level are the things that matter: what the function interface looks like, what the function does, and that the function itself is stateless. How the function is provisioned, how it runs, how it’s removed, how it’s scaled – well, we don’t care about that.
And the most interesting part? When the functions we write aren’t being invoked, it’s like they don’t exist. From a financial perspective, this is the underlying truth of the idea of serverless – it’s a term that relates to the business model. If a function doesn’t exist, what could you possibly be paying for?
Therein lies the beauty of FaaS. At least to me, it’s the ultimate in compute agility. I write a relatively short function in a simple editor, test and deploy it, wire it into the event fabric, and then sit back. My account won’t be charged until the function is actually invoked. I don’t have to keep anything running to listen for incoming connections, or to keep the runtime environment warm. All I must do is think in terms of functions.
The event fabric
What is this event fabric? Well, either by fate or by accident, or, as I like to think, by the sheer success of the protocol that powers the world’s biggest and most scalable web service (the web itself), HTTP has become the universal coupling. The model of HTTP’s request/response mechanism is well understood, has a beautiful and common simplicity when you need simplicity, and a depth to handle complex scenarios when that’s required too.
So one of the yarn types* in the event fabric is HTTP. One can think of this type as having two function invocation styles: a direct invocation style, where one piece of software calls another directly, and an indirect invocation style, where one piece of software registers an HTTP endpoint, a callback, to be invoked at a later stage, on an event or on the successful (or unsuccessful) completion of some computation. This indirect style has a name which you may have come across – webhooks. It’s a concept that was popularised by Jeff Lindsay, from whom I’ve learned a lot.
Maple Mill, Oldham
*Yes, I’m mindful of the fact that this weaving metaphor is in my DNA, growing up surrounded by a legacy of cotton mills in the heart of the industrial revolution here in the north west of England, a revolution that bootstrapped world industry.
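To make the indirect, webhook-flavoured invocation style concrete, here’s a minimal sketch of a handler in Python, shaped the way a FaaS runtime might call it. The event type names and the response shape are illustrative assumptions, not any particular provider’s API:

```python
import json

def handle_webhook(body: str) -> dict:
    """Hypothetical webhook handler: parse the event payload,
    react to it, and return an HTTP-style response."""
    event = json.loads(body)
    # The event type names here are illustrative only.
    if event.get("type") == "build.finished":
        status = "processed"
    else:
        status = "ignored"
    return {"statusCode": 200, "body": json.dumps({"status": status})}
```

The FaaS runtime would take care of routing the registered endpoint’s POST body into `body` and turning the returned dict into an HTTP response – that’s exactly the part we no longer have to care about.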
There are other types of event in the fabric beyond the straightforward webhook. These might be platform or provider specific; an oft-quoted example is from the Lambda offering from Amazon Web Services, where a cloud function can be triggered by activity relating to an S3 storage bucket (in the example, the function creates and stores a thumbnail of an image that has just arrived in a bucket). Of course, you can imagine other cloud providers having their own technical or business events. Think of all the business events that take place inside an SAP S/4HANA system, and that we could hook into.
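As a sketch of that S3-triggered pattern: the event shape below roughly mirrors AWS’s documented S3 notification records, but the handler itself is a stub – the actual image work is left as a comment, and the target bucket name is a made-up convention:

```python
def thumbnail_handler(event, context):
    """Sketch of a Lambda-style handler for an S3 put notification.
    The event structure mirrors AWS's S3 notification records; the
    image processing itself is stubbed out."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    # A real function would fetch the object, generate a thumbnail
    # (e.g. with an imaging library), and write it to a target bucket.
    return {"source": f"{bucket}/{key}", "target": f"{bucket}-thumbs/{key}"}
```

The point is that the function never polls for anything: the platform delivers the event, the function wakes, does its one job, and vanishes again.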
Of course, there are also the more mundane but equally important event types based on timers. Cron and other scheduling systems are alive and well. Even in Google’s Apps Script environment you can find a timer-based scheduling system that invokes your code at certain intervals.
Beware: with timer-based events, there will come a pivot point where the cost of having functions run at very frequent intervals means that perhaps you want to move back down the stack to a larger granularity – say, a more permanent container or even a small VM.
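To illustrate that pivot point, here’s a back-of-envelope sketch. All the prices are made-up placeholders, not any provider’s real rates – the shape of the calculation is what matters:

```python
# Break-even sketch for timer-driven functions vs an always-on VM.
# Every number below is an assumed placeholder, not a real price.
PRICE_PER_INVOCATION = 0.0000002   # assumed $ per request
PRICE_PER_GB_SECOND = 0.0000166    # assumed $ per GB-second of runtime
SMALL_VM_PER_MONTH = 5.00          # assumed $ per month for a small VM

def monthly_faas_cost(invocations_per_minute, mem_gb=0.125, duration_s=0.2):
    """Rough monthly cost of a timer-driven function at a given frequency."""
    invocations = invocations_per_minute * 60 * 24 * 30
    return invocations * (PRICE_PER_INVOCATION
                          + PRICE_PER_GB_SECOND * mem_gb * duration_s)
```

With these (invented) numbers, a function firing once a minute costs pennies per month, while one firing a thousand times a minute comfortably overtakes the small VM – that’s the pivot.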
Distributed and asynchronous
So we have a runtime that we only pay for when in use, a set of functions that – for all practical purposes – don’t exist except when they need to be invoked – and well-known standards that describe the contract against which we must design our computing logic. That’s a pretty nice state of affairs. But what’s it good for?
Well, consider the nature of our business computing today. We’re operating in a hybrid world, with systems on-premise and in the cloud. Business processes exist across system boundaries, and across those different granular layers we considered earlier:
- at the IaaS level, we have traditional ABAP stack runtime VMs (lower than that if they’re on premise on actual physical hosts, of course)
- at the PaaS level, we have services on the SAP Cloud Platform (Workflow, Predictive, Business Rules, etc)
- at the SaaS level we have Concur, SuccessFactors, Ariba and many other cloud software offerings
To be able to define discrete pieces of execution, that then lie dormant until they’re required, is a facility that we’ll find increasingly useful in this world, where processes are distributed, and naturally asynchronous. The concept of publish-subscribe, or “pubsub” for short, has been around for a good while. Even I got in on the act, co-authoring a Jabber Enhancement Proposal for pubsub in Jabber (XMPP), and building an HTTP implementation of pubsub called “Coffeeshop” (which in a roundabout way caused me to write the Alternative Dispatcher Layer – that’s a story for another time).
The ability to have relatively small pieces of code that do one thing and do it well borrows from a philosophy that has proven to be solid and pretty much ubiquitous – most of the Internet runs on Linux, a flavour of Unix, these days.
I have already found uses for cloud functions, in a FaaS context, in various projects over the past year. Instead of setting up some sort of Common Gateway Interface (CGI) contraption on the back of an existing web server somewhere, I just write the code that does what I need it to do and inject it into the cloud. I don’t have to own a VM or have access to a web server, and – best of all – I don’t have to worry about adding configuration to that existing web server without breaking anything, just to get a callback to be, well, callable. Moreover, I used FaaS in the form of Google Cloud Functions in my Discovering SCP Workflow series, writing a service proxy as a cloud function.
With the natural environment for computing in our SAP ecosystem becoming more distributed and event-driven (or message-driven, gosh, that’s yet another subject for another time), it makes sense that we have the right tools to control or just hook in to the flow, add enhancements and extensions, and even just write “glue code”, perhaps in a microservice sense, to provide the end-to-end solution.
If nothing else, the combination of a well-defined interface that’s required for writing functions for an FaaS runtime, with the natural stateless nature of that runtime, will focus our minds and hopefully help us improve how we write and deliver stable and reliable software.
One function at a time.
This post was brought to you by Pact Coffee’s La Secreta, and the birdsong of an early and peaceful Monday morning.
Read more posts in this series here: Monday morning thoughts.
In Cloud Platform Business Rules we have taken a decision-first approach, with a decision interface that has a clear separation between consumer app logic and decision logic, a.k.a. business rules... imagine if our rule engine ran in a FaaS mode, assuming Kubeless makes a Java runtime the default... we would fundamentally have low-code computation which key users can tweak with business rules and deploy as a function... monday morning thoughts...
Still having my coffee and waking up... so maybe it is still too early... but I have not heard of FaaS before – how would this be different from microservices? It sounds the same as, or at least similar to, how they're used.
Great question, regardless of coffee consumption 🙂
I think of microservices as a development / architectural style, where a solution is broken down into small and discrete parts, which work together, with coordination running over some protocol, usually lightweight. The idea of microservices is independent of any notion or assumption of cloud or payment model.
In contrast, functions-as-a-service is a cloud native phenomenon, with an emphasis on the utility of having the lifecycle of the function runtime managed for you.
What these two things have in common is the idea that small is beautiful. It stands to reason that a microservice based solution could be built with a collection of functions running in a functions-as-a-service environment.
That's my interpretation. Does it help?
here is a blog post worth reading on Serverless (comprising FaaS) and how it plays together with PaaS, 12-factor apps etc.: https://martinfowler.com/articles/serverless.html
Maybe that gives some additional insight
I can think of one thing that will hinder the adoption of FaaS (and I agree with Mr Solomon – I'm pretty sure we called this exact same proposition microservices just a few years ago, although I also agree we hadn't wrapped up the "how should I pay for this?" side of the equation very well, if at all). For me the sticking point will be the speed of response; adoption inside any process that needs user interaction will likely add a few extra milliseconds of response time. But for seldom-used, event-driven backend processes, it sounds great!
I will also point out something that was repeated ad nauseam to me by various cloud sales execs. Companies have a love/hate relationship with flexibility. The CIOs love it, the CFOs hate it. It's the CFOs who have to authorise the money to pay for solutions, and they want to budget against a fixed price. FaaS will struggle in large enterprises unless we find a smart way to bundle it into a known and expected cost... But I'm sure that's not the first time you're hearing that either!
SAP made a great fuss about the benefits of collapsing the layers to move from an application server and database to an application data server, and the value in deep integration/optimisation; this is pretty much the opposite end of the scale. It has benefits, but it would be great to balance the one approach against the other.
Loving this series,
Cheers Chris, great comments. Yes, there is an overlap, but as you rightly point out, the key with FaaS is more the cloud native flavoured operating and consumption model, taking from the more general serverless idea.
Of course, responsiveness is always an important factor, especially in synchronous UI situations. But there are plenty of other asynchronous scenarios (even in a UI context) where functions like this make sense, and then you have the non-UI scenarios too. As you rightly point out, event driven backend processes are good candidates.
On the pricing (CIO vs CFO), you're right. The models are changing, not least from a capex to an opex flavour (huge generalisation of course). So not only do you have this to contend with, but also the models aren't yet clearly defined or understood. But that in a way is also part of the journey; one of the things I sometimes remember is that the nature of what we do is to strive for better, which often means newer, and newer shapes. And those shapes aren't completely worked out. But then it's also down to us to have some degree of influence over what those shapes will eventually become. Which is great.
On the layering - I have to smile - one thing that my long-toothedness has taught me is that architectures come and go, and then come back again. As long as we don't see CORBA again, I'm quite happy with that 🙂
(And yes, as I'm sure you've picked up, I'm still trying to peddle the "mainframe is back again with cloud" meme!)
Really happy you're liking the series, and thank you for the thoughts - it's people's opinions in the comments that really make the writing worthwhile.
ahhhh come on, DJ....CORBA wasn't "that" bad for what it was trying to accomplish at the time given the muddy mix of technology and standards (what standards!??!!) back then. Heck....CORBA was much nicer than working with remote OLE and DCOM (which I have the scars to prove! haha)
But back to the discussion with you and C.Paine.....it is funny how everything old is new again. And the whole "consumption" vs fixed price discussion as well....it's like no one has ever seen any example of where a consumer pays only for what they consume over a period of time....*cough* utilities *cough* haha......but even from a "techy" perspective, this "use based" billing reminds me of the same ol discussions when web advertising was first finding its place in the world ....."you mean I get billed for page views or click throughs or impressions? What does that all mean?!?!!?!?!" haha
The idea of microservices and/or FaaS is simple enough....leave it to the wallet holders to complicate it so much.
I still remember my CORBA book being mostly used as a monitor stand. Ahh, those were the days.
Seriously though – you’re right, there. I’m reminded of this phrase, which I didn’t realise (until now) was from the old testament:
אֵין כָּל חָדָשׁ תַּחַת הַשָּׁמֶשׁ
“there is nothing new under the sun”.
With that said, I do think there is value in rediscovery, especially when it’s either in a different context, or where we’re in a different situation that means that the idea makes more sense / has more traction / stands a better chance of success. So perhaps there’s nothing inherently bad about reinvention.
What I do like about this incarnation of small, lightweight services is that (regardless of who – if anyone – pays for the service) it’s so easy to deploy and make work. And that’s partly due to the cloud native thinking.
Just before, I was talking to an old friend and colleague and within a matter of a few minutes (literally) had whipped up a simple HTTP service that received POST requests and stored the data in a cloud-based spreadsheet for immediate graphing. The fact that I didn’t know or care where the function ran, how it was provisioned, how the runtime was cleaned up again, or even how or where the data was stored – that to me is wonderful because if you look at it with one pair of lenses, you think “well, that’s nothing new”, but if you look at it through another pair you think “well, that’s pretty amazing”.
Thanks DJ for the great blog as always. Monday mornings have become a great opportunity for me to learn so many things :). I was thinking about FaaS, and how a function-as-a-service would stay flexible for enhancement. The reason for asking this question has to do with my ABAP background.
I am trying to compare FaaS with function modules/BAPIs in ABAP, which many programs can reuse. Most of them provide the flexibility for enhancements, which can be made based on different organisation parameters. I don't know, I might be completely wrong, but selling everything as a service, along with the flexibility to enhance that service, shall play a great role in the future.
So a possible wild solution I was thinking of: can we have an API with defined hooks where you can embed your own custom code while calling the API? For example, after tax calculation we have a hook named CALCULATION_DONE, and whatever function we pass to it receives the output and executes. Whatever the result is can then be used by the FaaS for further processing.
The idea might be stupid, but I think a mixture of named hooks in FaaS with the ability to execute code will bring in the flexibility to the service which I believe every organisation needs.
Or is it already handled in some different way? Open to all ears.
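For what it’s worth, the named-hook idea in the comment above can be sketched in a few lines of Python – a tiny hypothetical registry, with CALCULATION_DONE as the example hook; every name here is illustrative, not an existing API:

```python
# Hypothetical named-hook registry: consumers register functions
# against hook names, and the host runs them over the payload.
HOOKS = {}

def register(hook_name, fn):
    """Register a custom function against a named hook."""
    HOOKS.setdefault(hook_name, []).append(fn)

def run_hooks(hook_name, payload):
    """Pass the payload through every function registered on the hook."""
    for fn in HOOKS.get(hook_name, []):
        payload = fn(payload)
    return payload

# A consumer plugs custom logic in after the (imagined) tax calculation:
register("CALCULATION_DONE", lambda r: {**r, "rounded": round(r["tax"], 2)})
```

In a FaaS setting, each registered function could itself be a deployed cloud function, with the registry holding endpoints rather than in-process callables.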
Indeed, and very good thoughts there Nabheet. Yes, I think that functions can serve many roles, and their ephemeral runtime nature, as Chris Paine alluded to earlier in the event driven comment, almost encourages us to look at their utility in a certain way. Again, we have constraints that would turn out to be a good thing.
I can imagine cloud functions being used as "glue" code (as I've used them in the past), for actions, determinations, validations (yes I'm channeling some other folks and another topic entirely here* but still the pattern fits) and for chains of functions in the style of the Unix command pipeline, a computation model that is as beautiful as it is useful.
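That Unix-pipeline style of chaining can be sketched in a few lines of Python – a minimal, hypothetical compose helper, not any particular FaaS framework's API:

```python
from functools import reduce

def pipeline(*fns):
    """Compose functions left to right, like a Unix command pipeline:
    the output of each stage is the input of the next."""
    return lambda value: reduce(lambda acc, fn: fn(acc), fns, value)

# Each stage does one small thing and hands its result onward:
process = pipeline(str.strip, str.lower, str.split)
```

In a cloud setting, each stage could be a separately deployed function, with the event fabric doing the plumbing between them.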