Jade is an open-source framework that makes it simple to deploy and maintain JAMstack applications on AWS cloud infrastructure. The JAMstack is a web development architecture that utilizes modern tools and practices to make web apps fast, secure, and highly scalable. This case study examines how Jade abstracts away the time and complexity of writing backend code related to the underlying infrastructure, allowing developers to focus on building their applications.
2. Using Jade
To get started with Jade, run:

```shell
npm install -g @jade-framework/jade
```

Then, navigate to a directory where you would like to store the `.jade` folder, which will contain your Jade private key.
To explore Jade, make sure you have a public GitHub repository available. Jade has been tested for use with Gatsby. To get started, you may update your GitHub repository with our Gatsby template or follow the official Gatsby instructions to set up a new Gatsby project.
When ready, run the initialization command. This will check that you have the right AWS settings, prompt you for your GitHub URL, and provision the relevant AWS infrastructure for you.
Here is a list of the Jade commands available:

- Initialize a new JAMstack app and associated AWS services
- Add a new JAMstack app
- List all your existing JAMstack apps
- Freeze your EC2 instance when you aren't developing your app
- Unfreeze your EC2 instance to continue development
- Remove an app and its associated AWS infrastructure
- Remove all apps and all Jade AWS infrastructure
For notes on these commands, please visit the documentation on our GitHub page.
The following chapters will go into detail about the JAMstack architecture, Jade itself, and how we expanded on the functionality of Jade.
3. What is the JAMstack?
To understand Jade, we first have to examine two typical web app architectures before learning about the JAMstack and how it differs. If you are already comfortable with web app architectures and the JAMstack, skip ahead to section 4 to dive straight into launching a JAMstack web app.
3.1 Web app architectures
3.1.1 Static websites
The simple nature of these websites allowed for a simple architecture, but that simplicity meant limited functionality: every user saw the same static pages, and there was no way to dynamically source content.
3.1.2 Standard web apps
Web architectures subsequently evolved and introduced application servers tasked with sourcing data and dynamically building pages at runtime.
This improved the capabilities of websites immensely, but the increased complexity also introduced tradeoffs related to performance, security, and scalability.
3.2 Serving content from a standard architecture
3.2.1 How web pages are served
3.2.2 How web pages are dynamically built
If the page the client is requesting includes content from a source such as a database, the web server will send a request to an app server, which will reach out to that data source, dynamically build pages, and send them back to the client.
3.2.3 The tradeoff
Dynamically building pages allows content to be sourced at runtime but also introduces a degree of complexity that comes with tradeoffs:
- Performance: It takes time to source data and build pages
- Scalability: The infrastructure that handles the build must be scaled based on traffic
- Security: Increased runtime infrastructure opens up increased surface areas for attack
Web page speed, the time it takes to fully display the content on a page, is an important factor for both user engagement and search engine optimization. Almost 50% of users expect a web page to load within 2 seconds, and if a page takes longer than 3 seconds to load, statistically 53% of users are likely to abandon the site [6]. In addition, Google and other search engines consider performance an important metric for their search rankings.
3.3 New architecture for certain use cases
3.3.1 Building pages pre-runtime
It's obvious that modern web applications require the ability to source data, build pages, and serve these pages to clients. So let's now take a look at how we could potentially modify the architecture to retain this ability yet mitigate the associated tradeoffs in performance, security and scalability by sourcing data and building pages before runtime.
3.3.2 Static site generators
Static site generators (SSGs) are site-building tools whose main purpose is to source content, apply that content to templates, and generate web pages. Popular examples of SSGs include Hugo, Jekyll, and Gatsby.
SSGs generally apply four processes:
- Compile: Source content and generate pages
- Minify: Reduce the size of files by performing code optimizations, such as removing unnecessary whitespace and comments and shortening variable names
- Transpile: Convert ES6+ code to ES5 in order to remain compatible with all major web browsers
- Bundle: Package the resulting code and assets into an optimized set of static files
The main goal of this process is to generate pages in advance and eliminate the need for an application server to dynamically build pages at runtime.
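To make the minify step concrete, here is a toy illustration — a deliberately simplistic function that strips comments and collapses whitespace. Real SSGs delegate this to dedicated minifiers; this sketch only shows the idea:

```javascript
// Toy illustration of the "minify" step: remove JS comments and
// collapse runs of whitespace. Not production-grade — real builds
// use dedicated minification tools.
function toyMinify(source) {
  return source
    .replace(/\/\*[\s\S]*?\*\//g, "") // strip block comments
    .replace(/\/\/[^\n]*/g, "")       // strip line comments
    .replace(/\s+/g, " ")             // collapse whitespace
    .trim();
}
```

Applied to a small snippet, the output keeps the code but drops everything the browser does not need to download.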
3.3.3 Advantages of pre-building pages
The most significant result of building pages pre-runtime is the inherent decoupling of the request process from the build process. The overhead of generating pages now becomes unrelated to site traffic and is rather handled independently.
The elimination of dynamic builds at runtime also addresses the three issues we previously discussed, namely:
- Performance: The entire site can be served directly from a web server or Content Delivery Network (CDN) without any hold up due to building pages at request time
- Scalability: Since the process of compiling pages has been decoupled from the request/response cycle, the need to scale application servers and related infrastructure in response to site traffic is eliminated. The entire site can be served via a CDN, which is inherently optimized to scale
- Security: Removing infrastructure from the runtime equation and serving pre-built static pages from a CDN removes the majority of malicious attack vectors
3.3.4 Static web apps with dynamic functionality
Since we're suggesting an architecture that pre-builds the entire web app and serves it as static assets from a CDN, how do we implement dynamic functionality at runtime? In the standard web app architecture, the application server and database are responsible for providing dynamic functionality. To transition to a serverless model, this functionality can be abstracted to APIs and serverless functions.
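To sketch what this looks like from the client's side — with a hypothetical API endpoint, since the source names none — a static page's script can fetch data at runtime and render it into the page:

```javascript
// Sketch of runtime dynamic functionality on a static page: the
// client fetches data from an API and renders it. The endpoint URL
// is a hypothetical placeholder.
function renderComments(comments) {
  return comments
    .map((c) => `<p>${c.author}: ${c.text}</p>`)
    .join("");
}

// fetchImpl is injectable so the logic can be exercised without a
// network; in a browser the default global fetch would be used.
async function loadComments(fetchImpl = fetch) {
  const res = await fetchImpl("https://api.example.com/comments");
  const comments = await res.json();
  // In a browser this string would be assigned to an element's
  // innerHTML; here we simply return it.
  return renderComments(comments);
}
```

The server never builds this markup — the pre-built page plus a client-side API call replaces the application server for this piece of functionality.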
This simplified architecture where pre-built sites are served directly from a CDN and dynamic functionality is abstracted to APIs and serverless functions is commonly known as the JAMstack architecture.
3.4 The JAMstack architecture
3.4.1 Understanding the JAM in JAMstack
Markup is pre-built at build time and is often compiled with a static site generator, which applies content to templates. These static site generators can source data from content management systems: applications that provide an interface for users to create, manage, and modify content. Below is a table listing some common static site generators and their templates [4]:

| Template | Static Site Generators |
| --- | --- |
| Vue | Nuxt, VuePress, Gridsome |
| Markdown | Eleventy, Slate, Publish |
| Handlebars | HubPress, Hexo, Antora |
3.4.2 High-level view of the JAMstack
Having replaced the web server with a CDN, and the application server with APIs and serverless functions, the JAMstack is essentially a serverless architecture. It's important to note that this does not replace the need for servers entirely, since any custom APIs will continue to rely on servers.
As a result of the decoupling of the frontend and backend, the client, not the server, has become the orchestrator of dynamic functionality. There is a clear separation of responsibilities for developers: frontend developers can focus on building their application, calling APIs for data management and other functionality, while the maintenance and optimization of those APIs is handled separately.
3.5 Diving into the JAMstack
With the high-level overview established, this section will explain different aspects of the JAMstack architecture in further detail.
3.5.1 Serving static content from a CDN
JAMstack sites serve pre-built, static sites directly from a CDN. The use of CDNs to serve pre-built sites comes with significant advantages, including:
- Improved response time
- Easier scalability
- Reduced surface area for attacks
- Lower maintenance requirements
Improved response time comes from the fact that CDNs have edge locations closer to the end user [5]. In many cases, CDNs can reduce latency by hundreds of milliseconds. Traffic is routed to the nearest edge location, improving the distribution of assets to traffic globally. Relative to using servers, CDNs are also easier to scale due to the ease with which edge locations can be added to and removed from a system.
CDNs are also highly reliable. If an edge location goes down, a user will be routed to the next closest location. Additionally, the use of CDNs over web servers reduces the risk of attacks: web servers are often the target of attacks such as DDoS or hacking attempts, which requires security precautions and regular maintenance. Regarding maintenance, the use of CDNs allows developers to focus solely on their logic and not on infrastructure, as the CDN providers are responsible for maintaining and securing their networks.
3.5.2 Implementing dynamic functionality
In standard web app architectures, application servers are responsible for managing data and handling other business logic. In the JAMstack architecture, services are abstracted into APIs, allowing the client to be the coordinator of such data.
The developer is free to build their own APIs or utilize the huge ecosystem of 3rd-party services that exist today. Using third-party APIs allows developers to save time by implementing functionality already created by others and focus on the core needs of their web app.
Serverless functions, or Functions as a Service (FaaS), can also be used in place of servers to manage other business logic. FaaS is useful when APIs and frontend logic alone are not capable of replacing all the business logic once handled by the application server.
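A minimal sketch of such a function — an AWS Lambda-style handler replacing one small piece of app-server logic. The event shape follows API Gateway's proxy format; the form-handling scenario is an invented example:

```javascript
// Sketch of a serverless function replacing app-server logic:
// validate a form submission and respond. In a real Lambda module
// this would be exported as exports.handler; the "subscribe"
// scenario here is a hypothetical example.
const handler = async (event) => {
  const { email } = JSON.parse(event.body || "{}");
  if (!email) {
    return {
      statusCode: 400,
      body: JSON.stringify({ error: "email required" }),
    };
  }
  // A real function might write to a database or call a mail API
  // here before responding.
  return {
    statusCode: 200,
    body: JSON.stringify({ subscribed: email }),
  };
};
```

Because the function only runs when invoked, there is no long-lived application server to scale, patch, or secure.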
3.6 The JAMstack workflow
3.6.1 JAMstack at a minimum
Launching an application utilizing the JAMstack architecture, at its simplest, takes only 3 steps:
However, the JAMstack community has also established additional best practices in regards to deploying and maintaining applications to improve developer experience.
3.6.2 The JAMstack way
These elements are part of what is called “The JAMstack Way” [1]:
All source code should live in a git repository, and webhooks can be utilized so that the build process is initiated on each source code update.
When new commits are made to the git repository, a webhook can be sent to a build server which triggers a static site generator to start the build process.
Once pages are built, they should only be deployed if all the pages were built successfully. This is known as an atomic deploy - each deployment is self-contained, and should a build fail, the build process is completely rolled back. There are two major benefits to this approach. First, this ensures that state is always consistent for each deploy, and viewing a build does not depend on other sources. Second, there will be no downtime for the website, allowing users to visit an older site if there are issues with the latest build.
Finally, instant CDN invalidation ensures that the users are served the most updated web pages. Every time there is a new build, either a FaaS or a server will invalidate the CDN of old files, allowing users to view the latest content.
4. Launching a JAMstack app
Having understood the advantages of JAMstack web apps, developers may be interested in launching an app utilizing this architecture for themselves or their team. In this section, we examine certain approaches a team would consider when deploying a JAMstack app.
4.1 Manually provisioning infrastructure
The first approach in launching a JAMstack web app is to manually provision the underlying services responsible for building and deploying the application. For a basic application with core JAMstack functionality, a developer would need to provision a minimum of 5 services as outlined below.
Each service requires multiple steps to provision and must subsequently be configured to interact with other services. This is a time-intensive process that requires in-depth knowledge of the cloud provider and must be completed for each application the developer launches.
4.2 Maintaining the application
After provisioning infrastructure, the developer needs to consider how they will handle updates and deployments of their application. Based on the JAMstack workflow, a build step must be completed every time an update is made to the application’s source code. Once the build is complete, the application needs to be uploaded to a hosting environment and previous content on the CDN must be invalidated.
While this process can be handled manually, developers may be tempted to automate it to avoid repetitive tasks. To do so, the developer would need to implement systems that, at a minimum, can:
- Detect changes and pull code from the repository
- Build the site
- Deploy the built site to a hosting service
- Invalidate previous content on the CDN
The steps involved in doing so are complex, requiring significant knowledge and time to correctly implement.
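The steps above can be sketched as a small pipeline. Each step here is a named placeholder for the real work (a git pull, an SSG build, an upload to the host, a CDN invalidation), injected so the sequencing is explicit:

```javascript
// Sketch of the automation steps as a fixed-order pipeline. The
// step implementations are placeholders for real git/build/host/CDN
// operations and would be supplied by the caller.
async function runPipeline(steps) {
  const order = ["pull", "build", "deploy", "invalidate"];
  const log = [];
  for (const name of order) {
    await steps[name](); // each stage must finish before the next
    log.push(name);
  }
  return log;
}
```

The point of the sketch is the ordering: a deploy must never start before the build succeeds, and invalidation must never run before the new files are in place.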
4.3 JAMstack as a Service (JaaS)
With the difficulty involved in manually provisioning and maintaining such services, most developers may not choose this route to deploy a JAMstack app. Fortunately, there are many solutions out there that handle this for developers. We refer to these services as a JAMstack as a Service (JaaS).
JaaS providers manage the build and deploy process for developers.
In utilizing a JaaS, the developer only needs to concern themselves with updating their source code and committing the update to a git repository. The complexity of the build and deployment processes are abstracted away by the provider and the JAMstack application is delivered to end users via a CDN.
4.4 JaaS providers
Many JaaS providers have come into existence in recent years to tackle the problems outlined above. Below, we outline four of the most prominent providers in the industry:
Each of the above providers provisions the underlying architecture and handles the build and deployment processes for developers. They also include various other integrations, such as serverless functions, authentication, and forms.
However, with the exception of Vercel, these providers are not open-source. When using their services, developers are subject to their set fee structures. In addition, developers do not have any flexibility to adjust the underlying infrastructure used by the provider. As such, any changes will have to be made via the JaaS provider, which is likely to incur an additional cost.
4.5 Why we built Jade
We built Jade for developers who want full control over their infrastructure without having to provision that infrastructure themselves.
Jade is open-source, so developers are free to take the source code and modify it to suit their individual needs. Also, Jade provisions AWS resources for the developer, meaning:
- They are not tied to a set fee structure, but rather are charged based on usage of the underlying AWS resources
- They are free to customize those underlying resources to suit their specific needs
5. Jade Core
At its core, Jade connects public GitHub repositories and AWS services.
Upon initialization, Jade provisions the six underlying AWS services shown above. It configures these services, setting permissions and roles that govern how the services may interact with each other and which users may interact with them. Jade then connects those services to a GitHub repository to automate the build and deployment processes.
While AWS and GitHub are by no means the only suitable providers of cloud services and repository hosting, we chose to utilize them because of their prominence in the industry.
5.1 Build stage
5.1.1 Overview of build stage architecture
Jade utilizes an AWS EC2 instance to handle the build process.
The components of the EC2 instance include:
- Configuration related to the user and the provisioned AWS services
- An Nginx web server set up as a reverse proxy to handle incoming requests
- A node application that contains the logic to handle the build process
- A copy of the most recent source code from the project’s git repository as well as a copy of the most recent build of the application
The components of the Node application running on EC2 include:
- Server.js, which handles routing of incoming requests from Nginx
- Build.js, which gathers the required resources and builds the application
- Logger.js, which logs the results of each build
- Update.js, which sends the built application on to the deployment process
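A hedged sketch of the routing role Server.js plays — the route names and handler bodies here are illustrative, not Jade's actual routes:

```javascript
// Sketch of Server.js-style dispatching: map an incoming request
// path to a handler. Route names and responses are illustrative
// assumptions, not Jade's actual implementation.
function createRouter(routes) {
  return function dispatch(path) {
    const handler = routes[path];
    if (!handler) return { status: 404, body: "Not found" };
    return { status: 200, body: handler() };
  };
}

const dispatch = createRouter({
  // A GitHub webhook proxied through Nginx might land on a route
  // like this, handing off to the build logic (Build.js).
  "/webhook": () => "build triggered",
});
```

Keeping routing separate from the build, logging, and update logic is what lets each of the four files above have a single responsibility.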
5.1.2 GitHub webhook
The slideshow below details the process from the developer pushing a commit to GitHub to the build process being initiated.
During initialization, Jade provides a URL for the developer to add to their GitHub repository's webhook settings, so that anytime a commit is pushed to that repository, a webhook is sent to Jade's EC2 instance. Its first stop is the Nginx web server, which ensures that the request is coming in on the correct port and proxies the request to Jade's Node server.
The Node server ensures the request is sent to the correct route and passes the request to the build logic. Jade then pulls the master branch of the repository from GitHub, checks whether it has changed since the last build, and, if so, initiates the build process.
5.1.3 Examining the build process
Once the build process has been initiated, Jade uses the user configuration stored on EC2, which holds information including the build command for the static site generator, to gather the resources required to build the application. The resources gathered include the source code, which has already been pulled from GitHub, and data stored in external sources such as a content management system.
Jade is configured to utilize Gatsby as a static site generator and Contentful as a content management system. There are a multitude of options for both static site generators and content management systems; however, we chose these due to their relative prominence in the industry and the large communities and resources that exist around them. Other tools could be utilized with further configuration by the developer, but Jade is built to support these two out of the box.
Jade passes the source code to Gatsby, which parses it and reaches out to Contentful if the source code indicates the project is utilizing it as a CMS. Gatsby compiles the source code and the data received from Contentful, minifies the resulting code to save space, transpiles it to code that can be interpreted by all browsers, and bundles the assets into a static web application.
Once built, the application is sent to the deploy process.
5.2 Deploy stage
5.2.1 Overview of deploy stage architecture
The key components involved in deployment are as follows:
An S3 bucket hosts the live build of the application. The bucket has an event associated with it such that whenever a new build is received, a “new object” event fires and triggers an AWS Lambda function that contains logic to invalidate the previous build on CloudFront. This ensures that on every build, the CDN serves only the latest built files to end users.
5.2.2 CDN invalidation
CDN invalidation is an important issue that we ran into when building Jade. Assets distributed via AWS CloudFront expire after 24 hours by default. Therefore, even though Jade's live build S3 bucket is configured as the origin source for CloudFront, CloudFront will not pull newly built files until 24 hours after the last build was uploaded. CloudFront must be notified that the older files should be invalidated before it will point to the newly built files.
One characteristic of static site generators is that most updated files are versioned by the SSG during the build process: a sequence of characters is concatenated with the developer's original filename. The exception is certain files that are not meant to be cached, such as `index.html`, which is updated to reference these versioned files.
To overcome the invalidation problem, we considered sending invalidations for all files on CloudFront. However, we learned that AWS recommends minimizing the number of CDN invalidations made, as this is a costly operation and only 1,000 invalidations are provided free each month. Were Jade to invalidate every file on each application update and deployment, developers would likely hit this limit quickly.
We decided to use these characteristics in implementing CDN invalidations: Jade invalidates only one file every time a new build is detected, the `index.html` file. Because `index.html` references specific versions of files, CloudFront will detect which files are needed based on `index.html` and pull those files from S3 when a new version is uploaded, ensuring that the user is always presented with the most recent version of the application.
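A hedged sketch of this invalidation logic, assuming the AWS SDK v2 CloudFront client (the distribution ID is a placeholder, and the handler wiring is shown in comments only):

```javascript
// Sketch of the Lambda invalidation logic: when S3 receives a new
// build, invalidate only /index.html on CloudFront. The parameter
// shape matches the AWS SDK's createInvalidation call; the
// distribution ID is a placeholder.
function buildInvalidationParams(distributionId) {
  return {
    DistributionId: distributionId,
    InvalidationBatch: {
      // CallerReference must be unique per invalidation request.
      CallerReference: `jade-${Date.now()}`,
      Paths: { Quantity: 1, Items: ["/index.html"] },
    },
  };
}

// In the Lambda handler this would be used roughly as follows
// (not executed here):
//
// const cloudfront = new AWS.CloudFront();
// await cloudfront
//   .createInvalidation(buildInvalidationParams("EDFDVBD6EXAMPLE"))
//   .promise();
```

Keeping the parameter construction pure makes it easy to verify that only the single `/index.html` path is ever invalidated.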
With the deploy stage complete, this represents the core functionality of Jade.
6. Evolution of Jade
From our research into JaaS providers and our own experience building Jade core, we identified several challenges that developers might face. This section introduces the following challenges and how we addressed them:
- Supporting multiple developers
- Launching multiple Jade apps
- Docker for build dependencies
- Atomic deploys
- Staging previews
6.1 Multiple developers
The core architecture allows a single developer to be in charge of the provisioned AWS infrastructure. What if a team of developers, for example Alice and Bob, are eager to work on a JAMstack site together?
At present, Bob does not have the necessary credentials and files to run the Jade command. As a result, his command fails to interact with Alice’s AWS infrastructure.
One option available is for them to share credentials and config files. This may work but is largely an insecure and error-prone approach, given that files may be corrupted or missing due to errors during the sharing process.
To overcome this, the Jade framework creates a Jade IAM group with all the permissions needed. The primary developer then adds secondary developers to this group, and now all team members will be able to perform the commands they need.
6.2 Multiple Jade apps
A team of developers may want to create multiple Jade applications. To understand what problems this may cause, let us closely examine the memory usage of a single EC2 instance:
At present, there is sufficient memory to handle a single application. EC2 will store the build environment, source code, and other artifacts related to this app, which amounts to nearly 1GB of data that is kept on EC2 even for the simplest of sites.
If a developer adds a second app, the amount of memory used may exceed the EC2 capacity. In addition, hosting multiple apps on one build server makes it difficult to identify the relevant build and config files for each app.
To overcome this, a developer may choose to vertically scale their EC2 server by upgrading to one with more memory. However, after starting a certain number of Jade apps, the server will again face memory constraints.
As such, when developers launch a new app, Jade instead provisions a dedicated EC2 server.
This not only reduces the likelihood of memory issues but also allows developers to save costs by freezing their servers when not working on a particular app. When they would like to make an edit, they can unfreeze it and continue to develop their site.
Note that developers who make use of this functionality will have to update their GitHub webhook with the new IP address. To facilitate this, developers can run `jade list` in the console to see the latest IP addresses of their EC2 instances.
6.3 Docker for build dependencies
In Jade core, developers have to use the version of Node installed in EC2 to build their files. This may be problematic if, say, their web app depends on an older version of Node, which could cause dependency errors or extended build times. Here is an illustrative example of what could happen:
To overcome this, we implemented Docker within EC2 to store build environments and other user configurations. To support Docker, Jade had to be refactored in certain ways. We wrote a Dockerfile that specified the environment settings for a new Docker image. We also set up a folder in EC2 for Docker to build and send files to. Finally, instead of invoking the build directly in EC2, we created a new Node file to manage Docker, giving us direct control over the build process. These changes allowed us to largely preserve our existing method of interacting with other AWS services, notably S3 and DynamoDB.
As of now, Jade's default Docker setup allows a user to build and deploy Gatsby applications. If desired, developers can edit the `Dockerfile` to choose which version of Node they want to use. If more build configuration is needed, the `dockerBuild.js` file can be edited as well. The purpose here is to keep Node environments separate from the EC2 Node version, thereby eliminating any dependency concerns.
6.4 Atomic deploys
As part of The JAMstack Way, developers expect that each deploy is atomic and self-contained. This means that they can view a previous build without having to worry that the state of the source code and data used to build the files has changed. Instead, the static files of older versions can be viewed and analyzed in their entirety.
In Jade core, when a developer pushes a new commit, the automatically built files actually override previously built files. As a result, the user is unable to view previous deploys as desired.
To introduce atomic deploys, Jade utilizes a new bucket to store all historical builds. Doing so lets developers keep track of all builds that have taken place. Each time a build is made, a zip file is sent to the historical builds bucket in addition to the unzipped files being sent to the live bucket. This allows the developer to log into S3 to download and view a historical build.
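One way to sketch this dual upload — the bucket naming and key scheme below are assumptions for illustration, not Jade's exact conventions:

```javascript
// Sketch of atomic-deploy bookkeeping: every build is uploaded
// unzipped to the live bucket and zipped to a historical bucket.
// Bucket names and the key scheme are illustrative assumptions.
function historicalKey(appName, builtAt) {
  // Timestamp the archive so every build has a unique, sortable key.
  const stamp = builtAt.toISOString().replace(/[:.]/g, "-");
  return `${appName}/${stamp}.zip`;
}

function uploadPlan(appName, builtAt, files) {
  return [
    // Unzipped files go to the live bucket for CloudFront to serve.
    ...files.map((f) => ({ bucket: `${appName}-live`, key: f })),
    // The zipped build is archived for later download and review.
    { bucket: `${appName}-builds`, key: historicalKey(appName, builtAt) },
  ];
}
```

Because the archive key never collides with an earlier build, deploying a new version can never overwrite the history a developer may need to roll back to.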
6.5 Staging previews
As newly built files are automatically distributed to the CDN, developers are not able to preview their sites before they go live. This means that issues with a site are not caught before it is distributed to end users.
To allow developers to view their site, we introduce the use of a `staging` branch where developers can view their website before it gets distributed to the CDN. Developers can run `git push origin staging`, preview their site, and then run `git push origin master` once they are satisfied with the site.
With these features, developers have to use the AWS console to manage their built files, particularly to preview historical builds and the staging build. This may become complicated and requires developers to be careful about which build they are downloading and viewing.
To facilitate ease of use, users can simply run the `jade admin` command to spin up an admin panel. This admin panel uses an Express.js server coupled with React on the frontend to generate a dashboard for users.
The admin panel provides key information about all the Jade apps that the developer is running. Developers are provided with a link to their production site, staging site, and downloads for historical builds. Developers can also view the EC2 IP address should they need to adjust the GitHub webhook or SSH into EC2.
6.6 Final architecture
To demonstrate how Jade has evolved, here is a review of the core functionality that many JaaS providers offer:
With Jade features added to this functionality, our final architecture looks like this:
7. Future work
In the future, we would like to implement the following features:
- Allow users to deploy the live site from the admin panel
- Manage developers' serverless functions
- Allow sourcing from other Git hosting providers (ex. GitLab, BitBucket)
- Automatically set up webhooks for each Jade app
- Support other languages (ex. Ruby/Go) and SSGs (ex. Jekyll/Hugo)
References

1. Overview of the JAMstack
2. How a webpage is built
3. Historical trends in the usage of client-side programming languages
4. A List of Static Site Generators
5. Why Use a CDN?
6. How Page Speed Affects SEO & Google Rankings
7. Overview of Netlify, Vercel and Amplify
8. Four levels of AWS infrastructure as code
9. Creating EC2 instances with AWS-SDK
10. Sending files from EC2 to S3
11. Attaching EC2 IAM roles
12. Handling errors during SSH connection
13. Wrapping SSH connection in promises
14. Configuring Nginx on EC2 servers
15. Overview of static site generators