Improves application performance for Amazon RDS, Aurora, and Redshift. No code changes required. Great for microservice, container, and legacy application environments.  Check out our use cases:

Amazon ElastiCache

Amazon RDS / Aurora

Amazon Redshift

Heimdall for ElastiCache

Automated Query Caching: Heimdall’s intelligent auto-caching runs in the application tier, removing network latency to the database. The Heimdall distributed architecture is scalable: local caches act together as one cache cluster. Result sets are cached in two tiers, 1) local memory and 2) Amazon ElastiCache, and are automatically invalidated upon writes to the underlying tables. Best of all, Heimdall deployment requires zero code changes.
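The two-tier lookup and write-invalidation behavior described above can be sketched in a few lines of Python. This is an illustrative model only, not Heimdall's actual implementation; the class names and the dict-backed stand-in for ElastiCache are assumptions:

```python
class DictRemote:
    """Stand-in for a shared remote cache such as Amazon ElastiCache (Redis)."""
    def __init__(self):
        self.d = {}
    def get(self, k):
        return self.d.get(k)
    def set(self, k, v):
        self.d[k] = v
    def delete(self, k):
        self.d.pop(k, None)

class TwoTierCache:
    def __init__(self, remote):
        self.local = {}        # tier 1: in-process memory, no network hop
        self.remote = remote   # tier 2: shared cache cluster
        self.tables = {}       # table name -> set of cached query keys

    def get(self, sql):
        if sql in self.local:              # fastest path: local memory
            return self.local[sql]
        result = self.remote.get(sql)      # fallback: shared cache
        if result is not None:
            self.local[sql] = result       # warm the local tier
        return result

    def put(self, sql, tables, result):
        self.local[sql] = result
        self.remote.set(sql, result)
        for t in tables:                   # remember which tables feed this result
            self.tables.setdefault(t, set()).add(sql)

    def invalidate(self, table):
        # Called when an INSERT/UPDATE/DELETE touches `table`.
        for sql in self.tables.pop(table, set()):
            self.local.pop(sql, None)
            self.remote.delete(sql)
```

A query result is served from local memory when possible, from the shared tier otherwise, and any write to a contributing table evicts the stale entries from both tiers.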

Batch Processing

Heimdall Data improves DML performance by over 1000x. We batch singleton INSERTs against a table and execute them under a single transaction.

  • Ideal use case: INSERTing a large amount of data at once on a single thread. Heimdall processes the batch far faster than executing the individual queries outside of a transaction. See the demo video below.


  • Not so ideal use case: Concurrent writes and reads against the same table on the same thread see little benefit, since each read blocks until the pending DML operation completes anyway.

Watch the demo video below on how Heimdall’s batching of singleton INSERTs improved performance 1000x!


Video Time      Topic Covered
0:00 – 7:29     Batch Processing
7:30 – 12:24    Automated Caching
12:25 – 14:56   Connection Pooling / Multiplexing
14:57 – 25:40   Q&A
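The speedup from batching can be reproduced in miniature with Python's built-in sqlite3 module (a stand-in for the real database; the proxy performs the equivalent batching transparently, without application changes):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, v TEXT)")
rows = [(i, "x") for i in range(10000)]

# Singleton INSERTs, each committed in its own transaction (slow path).
t0 = time.perf_counter()
for r in rows:
    conn.execute("INSERT INTO t VALUES (?, ?)", r)
    conn.commit()
singleton = time.perf_counter() - t0

# The same rows batched under a single transaction.
conn.execute("DELETE FROM t")
conn.commit()
t0 = time.perf_counter()
with conn:  # one BEGIN ... COMMIT around all the inserts
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
batched = time.perf_counter() - t0

print(f"singleton: {singleton:.3f}s  batched: {batched:.3f}s")
```

The exact ratio depends on the database and network round-trip time; against a remote database, where each commit costs a network round trip, the gap is far larger than in this local sketch.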






Hybrid Transactional Analytical Processing

Companies deploy Amazon Redshift for large-scale MPP (Massively Parallel Processing) data analytics. These systems apply parallel compute resources to answer queries quickly. However, there are still performance challenges:

     –  High latency: Distributed queries require coordination of multiple nodes to generate an answer

     – No materialized views: if a query result is reused by future queries, extra work is required to preserve the result, or the same base query must be executed repeatedly

     – Frequent repeated queries add load that slows Redshift processing

Fast materialized views are very important in analytics environments.  When reports are generated, a subset of data is pulled from the back-end data store. Heimdall provides the following performance enhancement:

     –  Queries against materialized views are routed to an alternate database (e.g. Amazon Aurora Postgres), acting on behalf of Amazon Redshift. Heimdall intelligently routes Analytics and OLTP traffic to the appropriate data source for optimal performance.

     – Configurable triggers are used to update the data in the view, whether due to updates against the back-end data store, a schedule, or other conditions. These triggers are executed by the Heimdall database proxy and require no changes to the application or Redshift to implement. Materialized views can then be used and their load distributed to a Postgres server for repeated queries against a data set extracted from the analytics database (e.g. dashboards). The net result is faster reports and increased Redshift scale.
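The routing decision described above can be sketched as a simple rule: if a query only touches tables covered by a materialized view, send it to the Postgres copy; otherwise fall through to Redshift. The view name and matching rule below are illustrative assumptions, not Heimdall's actual configuration:

```python
# Hypothetical view registry: view name -> set of base tables it covers.
MATERIALIZED_VIEWS = {
    "daily_sales_mv": {"orders", "order_items"},
}

def route(tables_referenced):
    """Return (target, view) for a query given the set of tables it reads."""
    for view, covered in MATERIALIZED_VIEWS.items():
        if tables_referenced <= covered:
            return ("postgres", view)   # serve from the cheap, fast copy
    return ("redshift", None)           # analytics query: use the MPP cluster
```

A dashboard query reading only `orders` would be served from Postgres, while an ad-hoc query joining in other tables still goes to Redshift.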

Automated Persistent Connection Failover

Heimdall detects database health and performs a failover to the standby. But doesn’t Amazon RDS/Aurora already support this? How is Heimdall different?

Heimdall’s failover takes an application-centric approach: upon a failure, Heimdall queues the connection’s traffic and transparently creates a new backend connection to the standby instance. This greatly reduces or eliminates application errors and exceptions. Failover is transparent to the application and user; not so with a DNS- or IP-based failover solution.
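The queue-and-reconnect behavior can be sketched as follows. This is a simplified model, assuming a `connect` callable that raises `ConnectionError` on failure; the real proxy additionally replays session state on the new connection:

```python
import time

def connect_with_failover(endpoints, connect, retries=3, delay=0.1):
    """Try each endpoint in order, briefly queueing between rounds,
    so the caller never sees the failure of an individual backend."""
    last_err = None
    for _ in range(retries):
        for ep in endpoints:
            try:
                return connect(ep)          # success: transparent to the caller
            except ConnectionError as e:
                last_err = e                # this backend is down: try the next
        time.sleep(delay)                   # queue briefly before another round
    raise last_err
```

Because the retry happens inside the proxy, the application keeps its existing connection open and simply experiences a short pause instead of an exception, unlike DNS-based failover, where clients must time out, re-resolve, and reconnect on their own.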