OpenResty is an extensible open-source web platform that helps developers build high-performance web applications. It is used by some of the biggest tech players in the world, including Cloudflare, GitLab, TikTok, Lyft and many others.
The LotusFlare Server Engineering team uses OpenResty extensively for a variety of engineering tasks. It is a very powerful platform for high-load projects, and at the same time it is easy to use for prototyping, especially when and where development speed matters.
We found that there is not much suitable information available to newcomers to OpenResty. In this tech blog, the LotusFlare Server Engineering team will provide an introduction to OpenResty to help uncover its benefits.
In this article we will cover:
- What is OpenResty?
- In which situations should one use it (and in which should one not)?
- How does OpenResty compare to similar languages (in performance and other characteristics)?
- Why choose OpenResty over NodeJS, Go, or Python?
- The OpenResty ecosystem and libraries
OpenResty excels at handling a large number of requests with minimal CPU usage and heavy I/O, which is the case for most web servers on the Internet. Looking at the type of load on most LotusFlare web servers, they mainly perform I/O operations, such as accepting requests from clients, reading data from data sources, and making requests to other services.
Memory efficiency is achieved because the application does not fork a process or spawn a thread per connection, in contrast to conventional architectures like Apache (which maps a process to each connection) or a typical JVM-based web server (which maps a new thread to each connection). OpenResty is single-threaded; it creates a new Lua coroutine to keep the context of each request, so the memory consumption is much cheaper than that of a new process or thread.
CPU resources are saved by avoiding the creation and destruction of processes or threads per connection, as well as the context switching between them.
Compared to other web technologies, OpenResty shows great performance. Let’s check a request-throughput benchmark for a simple echo server implemented on Rust-Actix, Node.js, Py-Tornado, Go-Http and OpenResty. The simple server app just responds with “Hello, World” content to every request. The programs were run on a computer with an Intel Core i7 CPU 2.2GHz × 12 and 16GB of RAM running Ubuntu 20.04.
The command executed on the client is `ab -kc100 -n1000000`. It sends 100 concurrent requests in keep-alive mode until 1 million requests have been sent. The results are shown below.
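For context, an OpenResty variant of such an echo server fits in a short nginx.conf fragment like the sketch below (the port and worker settings are illustrative, not the exact benchmark configuration):

```nginx
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;

        location / {
            default_type text/plain;
            # the whole "application" is one embedded Lua block
            content_by_lua_block {
                ngx.say("Hello, World")
            }
        }
    }
}
```

Starting OpenResty with this file and pointing `ab` at port 8080 is all that is needed to reproduce a benchmark of this shape.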
As we can see, in terms of web-server request throughput, OpenResty is on par with the industry leaders. But OpenResty is not just a framework or library for handling HTTP. It is a comprehensive platform that provides a wide choice of toolchains with which none of the other frameworks or libraries can compete.
OpenResty does not fit every web-application use case. In particular, when your program needs to do CPU-intensive computations such as machine learning or video processing, such work should not be pushed to OpenResty because of its single-threaded architecture. Placing a CPU-intensive task on the thread blocks the other tasks, which must wait until the original task is completed.
Along with its great I/O performance, OpenResty is on the cutting edge in other, less obvious areas. Let’s consider some of them.
One of the most mind-blowing (in server-engineering terms) and non-obvious properties is that the code looks sequential but is asynchronous under the hood. A developer doesn’t need to use any async mechanisms explicitly, as the Nginx Lua API handles everything. This means that when an app performs an I/O operation, the execution thread doesn’t wait for the operation to complete before proceeding. Instead, it remembers the context of the current request and picks some other task to process. That makes the code very readable, with far less to worry about in managing asynchrony.
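As an illustration, here is a sketch of a handler using the bundled lua-resty-redis driver (the host, port and key are illustrative). The code reads top to bottom, yet every network call yields the request’s coroutine instead of blocking the worker:

```lua
-- inside a content_by_lua_block; each network call below yields the
-- request's coroutine while Nginx waits on the socket
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(1000)  -- 1s for connect/send/read

-- looks like a blocking connect, but is non-blocking under the hood
local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

-- looks synchronous, runs asynchronously
local value, err = red:get("greeting")
ngx.say(value ~= ngx.null and value or "no greeting set")
```

No callbacks, promises or async/await keywords appear anywhere; the cosocket layer does the suspending and resuming for us.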
Another advantage of OpenResty is a very short build-and-deploy cycle. Once a code change is implemented, that’s basically it: to deploy it, just copy the code to the corresponding destination on the machine and restart nginx, which takes only a few seconds. Lua is an interpreted, JIT-compiled language, so engineers don’t need to wait for the code to compile and build before an actual deployment. On big projects this is a real problem, with engineers spending a decent amount of time waiting for compilers and build managers to finish.
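The whole deploy loop described above can be sketched as a couple of shell commands (the paths and host name are illustrative):

```shell
# copy the updated Lua code to the server...
rsync -a app/lua/ server:/usr/local/openresty/site/lua/

# ...and gracefully reload the workers (the openresty binary is nginx)
ssh server 'openresty -s reload'
```

A graceful reload lets in-flight requests finish on the old workers while new workers pick up the fresh code.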
There is also the benefit of OpenResty being open-source software. Engineers can contribute to the project by filing bugs, proposing and implementing new features, making performance improvements and many other things. If the main project doesn’t support the features needed, or if it unexpectedly goes in a direction that doesn’t fit the goals of what’s being built, engineers are free to fork and do whatever is appropriate in that case.
Nginx server flexibility
Nginx’s advanced server features can be applied to an OpenResty application out of the box. Many server properties, such as limits on the number of requests, request timeouts, request-body size limits, various security parameters and much other cool stuff, can be configured in only a few lines of the nginx.conf file.
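For instance, a few of the properties mentioned above can be set with a handful of directives; a minimal sketch (the zone name, limits and port are illustrative):

```nginx
http {
    # allow at most 10 requests/sec per client IP
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        listen 8080;

        client_max_body_size 1m;    # reject oversized request bodies
        client_body_timeout  10s;   # drop clients that send the body too slowly

        location /api {
            limit_req zone=perip burst=20;
            content_by_lua_block {
                ngx.say("ok")
            }
        }
    }
}
```

All of this sits next to the application code in the same configuration file, with no extra middleware required.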
C language support
As a bonus of using OpenResty, if one needs to handle a special case and hasn’t found a suitable library, one can always find a C equivalent and enable it using the FFI (Foreign Function Interface). This way one can directly call external libraries and their C functions, and use C data structures, from Lua code.
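A minimal sketch of the FFI approach, declaring and calling a libc function from LuaJIT:

```lua
local ffi = require "ffi"

-- declare the C signature we want to call (strlen from libc)
ffi.cdef[[
size_t strlen(const char *s);
]]

-- call the C function directly; ffi.C resolves symbols from the C runtime
local n = tonumber(ffi.C.strlen("OpenResty"))
print(n)  -- 9
```

The same mechanism scales up to loading third-party shared libraries with `ffi.load` and passing C structs back and forth, all without writing a line of binding code in C.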
If you have never heard about OpenResty, you may think that its ecosystem must be relatively small compared to other stacks, but it’s not. One of the strongest properties of OpenResty is its community and ecosystem. Almost every use case one will need to satisfy using OpenResty is covered by a library that is supported and maintained by the OpenResty community.
There are core libraries bundled in the package, so one doesn’t need to install them separately. Some of them are:
- lua-resty-string - various string utilities and common hash functions
- lua-resty-websocket - Lua WebSocket implementation
- lua-resty-mysql - Lua MySQL client driver
- lua-resty-redis - Lua Redis driver
- lua-cjson - a library for fast JSON parsing and encoding in Lua
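As a quick taste of the bundled libraries, decoding and re-encoding JSON with lua-cjson looks like this (the sample payload is illustrative):

```lua
local cjson = require "cjson"

-- decode a JSON string into a Lua table
local t = cjson.decode('{"name":"OpenResty","ok":true}')
print(t.name)  -- OpenResty

-- encode a Lua table back into a JSON string
print(cjson.encode({ answer = 42 }))  -- {"answer":42}
```

Because it is bundled, this works out of the box in any OpenResty handler with no installation step.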
Along with the built-in OpenResty libraries, drivers for various other databases and storage systems are also available to be set up for one’s needs. The list includes (but is not limited to):
- lua-resty-cassandra or lua-cassandra (the latter is used at LotusFlare);
If in search of a message-queuing library, one can find solutions for Kafka, RabbitMQ and others. If one wants a testing tool for OpenResty itself, or for various Lua scripts, one may want to check this list.
OpenResty Package Manager
The OpenResty Package Manager (opm) command-line utility is the only thing one needs to install or uninstall a desired library. opm is the official OpenResty package manager that lets one use modules contributed by the community (they can be found here).
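Typical opm usage looks like the following (the package shown, pintsized/lua-resty-http, is one example from the community repository):

```shell
# search the community repository for HTTP-related modules
opm search http

# install a module, then list what is installed
opm get pintsized/lua-resty-http
opm list

# uninstall it again
opm remove pintsized/lua-resty-http
```

Installed modules land on OpenResty’s default Lua search path, so a plain `require` picks them up with no further configuration.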
Command-line utility
One more utility one should pay attention to is RestyCLI. It is a powerful scripting tool that gives one the ability to use OpenResty features at the command prompt, i.e., run commands in the context of nginx or use the Lua Nginx API to create various command-line utilities.
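A couple of quick sketches of the resty utility (the one-liners are illustrative):

```shell
# run a Lua one-liner inside a headless Nginx event loop
resty -e 'print("hello from OpenResty")'

# the Lua Nginx API is available too, e.g. a non-blocking sleep
resty -e 'ngx.sleep(0.1) print("woke up at ", ngx.now())'
```

This makes it convenient to prototype snippets against the real Nginx Lua API before wiring them into nginx.conf.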
If you’re interested in exploring more OpenResty tools, feel free to do so using the official resources.
The above description is a tiny piece of OpenResty, and there is much more to learn. The LotusFlare Server Engineering team hopes this was helpful for understanding the basic OpenResty features and seeing how powerful a tool it can be for any project, thanks to the simplicity that Lua provides, the performance of Nginx and the JIT compiler, and the wide range of platform tooling maintained by the community.
Thanks for visiting our site and reading this article. LotusFlare has open job positions in the server engineering team! If you are interested in joining LotusFlare, explore our current openings here: https://lotusflare.com/careers/current-openings/
Server Engineering Team