MemoryLess: the new ServerLess

Brian Akey
3 min read · Jun 2, 2020

Just as we have gone from servers to instances, instances to containers, and containers to functions, the idea of MemoryLess is to move low-usage services to non-running services. With a few small servers, you can host many apps without needing many resources. Using MemoryLess and containers behind an intelligent load balancer, services can be ramped up during the day and switched over to MemoryLess systems at night to save money. It also reduces an application's carbon footprint. Simplicity and demand-based services.

Having worked at many startups, I have seen many Internet services that get more traffic from the monitoring service than from actual users.

With many companies needing to find ways to save money, this is an easy solution: only run things when they are used.

Monitoring can still be done with a check against the service's primary script: verify the port and the main script without having to light up any real services and waste resources.
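
For example, a monitoring probe might only confirm that the dispatcher's port accepts a connection and that the service's main script still exists and parses, without ever executing it. Here is a minimal sketch of that idea; the port and the /srv/memoryless/app/main.py path are made up for illustration, not part of the original setup:

```python
#!/usr/bin/env python3
"""Lightweight health check: verify the listener port and the main script
without actually invoking the service or loading it into memory."""

import ast
import socket
import sys
from pathlib import Path

HOST = "127.0.0.1"                                  # assumed dispatcher address
PORT = 8080                                         # assumed dispatcher port
MAIN_SCRIPT = Path("/srv/memoryless/app/main.py")   # hypothetical script path


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def script_ok(path: Path) -> bool:
    """Return True if the script exists and parses as valid Python."""
    if not path.is_file():
        return False
    try:
        ast.parse(path.read_text())
        return True
    except SyntaxError:
        return False


if __name__ == "__main__":
    ok = port_open(HOST, PORT) and script_ok(MAIN_SCRIPT)
    print("OK" if ok else "FAIL")
    sys.exit(0 if ok else 1)
```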

Code can be added at any time and is available on the next request. It supports many languages: anything that can be run once and handle stdin and stdout. Using Alpine, the base memory footprint is 40 megabytes. It can run in an instance, a container, or even an on-demand ECS cluster.
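
Since the article describes the mechanism only at a high level, here is a minimal sketch of what such a dispatcher could look like (not the author's implementation): a tiny resident HTTP listener that launches the matching handler file once per request and pipes the request body through stdin/stdout. The handlers directory, port, and route mapping are assumptions.

```python
#!/usr/bin/env python3
"""Minimal MemoryLess-style dispatcher: the only resident process is this
small HTTP listener; handler code stays on disk and is executed per request,
reading the request body on stdin and writing its response to stdout."""

import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

# Hypothetical layout: one executable handler file per route.
HANDLER_DIR = Path("/srv/memoryless/handlers")


class Dispatcher(BaseHTTPRequestHandler):
    def do_POST(self):
        # Map e.g. /orders to /srv/memoryless/handlers/orders
        # (any language works, as long as the file is executable).
        script = HANDLER_DIR / self.path.strip("/")
        if not script.is_file():
            self.send_error(404, "no handler for this path")
            return

        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))

        # Run the handler once: request body in on stdin, response out on
        # stdout, then the process exits and frees its memory.
        result = subprocess.run(
            [str(script)], input=body, capture_output=True, timeout=30
        )

        status = 200 if result.returncode == 0 else 500
        self.send_response(status)
        self.send_header("Content-Type", "application/octet-stream")
        self.end_headers()
        self.wfile.write(result.stdout)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Dispatcher).serve_forever()
```

Dropping a new executable into the handlers directory is all it takes for the code to be served on the next request; nothing stays resident except the dispatcher itself.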

It can also be a great tool for microservices that only need to run infrequently. Many sites see low usage until a company gets big. Smaller microservices that don't see a lot of traffic are a great example of what can go MemoryLess. We can't all be Facebook and Google.

Even though the service is simple and uses few resources, it can still be scaled up for large environments. With current services like React apps, we keep the application running in memory even when it is not needed, which is a huge waste of resources. If a site sees thousands of connections per second, the normal methods work great. For anything smaller, MemoryLess is a great idea: keeping the code on disk is better.

Currently, it can take hundreds if not thousands of gigabytes of RAM to keep all of a site's code loaded in memory. With the MemoryLess system, 40 MB is all that is needed to start the service, plus only enough memory to run each piece of code, so one instance or container can run the whole site.

Used in combination with current methods, you can balance cost and speed. For example, you can run MemoryLess at night and use container pods during the day for speed, as sketched below. Everything works all the time without the need for so much hardware.
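
One way to wire that up (a hedged sketch, not part of the original article) is a scheduled job that re-weights a load balancer between a container target group and the MemoryLess host: full weight to the pods during business hours, full weight to MemoryLess overnight. The example below assumes an AWS ALB managed with boto3; the ARNs and the 8 am/8 pm cutoffs are placeholders.

```python
#!/usr/bin/env python3
"""Cron-driven sketch: shift ALB traffic to container pods by day and to the
MemoryLess host by night. Requires boto3; all ARNs below are placeholders."""

from datetime import datetime

import boto3

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/example/..."    # placeholder
PODS_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/pods/..."         # placeholder
MEMORYLESS_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/mless/..."  # placeholder


def set_weights(pods_weight: int, memoryless_weight: int) -> None:
    """Point the listener's default forward action at weighted target groups."""
    elbv2 = boto3.client("elbv2")
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": PODS_TG_ARN, "Weight": pods_weight},
                    {"TargetGroupArn": MEMORYLESS_TG_ARN, "Weight": memoryless_weight},
                ]
            },
        }],
    )


if __name__ == "__main__":
    hour = datetime.now().hour
    if 8 <= hour < 20:        # daytime: speed matters, send traffic to the pods
        set_weights(100, 0)
    else:                     # overnight: save money, go MemoryLess
        set_weights(0, 100)
```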

What about on-demand services or functions? They still have to bring up a whole container and subsystems like pm2 for Node applications, and this takes time. With MemoryLess you run the code when you need it. Nothing more and nothing less. Much quicker than starting up a full container. It also goes hand in hand with microservices, because code can be simplified into a smaller application. A directory full of files is much easier to administer than building a Kubernetes cluster full of pods using tons of resources.

MemoryLess: simple, and just this side of free.
