Common Problems and Resolution
- When I try to add a new configuration object (vdb/data source, etc), it highlights the box in red, and won’t let me commit it.
A: Each configuration object is stored in its own file under the config directory, named after the object. Allowing the same name to be shared across object types would therefore result in incorrect configurations, so duplicate names are not permitted at this time.
- I have traffic flowing through the HDAL, as shown on the dashboard, but nothing is showing on the Analytics screen.
A: In order for Analytics to work, logging must either be enabled in the logging configuration of the VDB, OR a log rule must match the queries passing through the system.
- I have configured the HDAL to act as a proxy or JDBC driver, but according to the logs the cache never initializes.
A: Please check that the required grid-cache libraries are in the HDAL’s class path.
- We have configured the cache and cache rules, and are getting some cache hits, but the cache hit rate is very low. Why is this?
A: There could be several reasons for this. First, if the “in-trans” flag is not selected on the cache rule and the application always uses transactions, the cache hit rate may be very low or even 0%. Second, the tables may be written to constantly, triggering cache invalidations that prevent any cached content from being returned. This can happen with libraries that use a database as a glorified key-value store, where all data lives in one frequently updated table. Third, it may simply be that identical queries are never actually generated; an example is a geolocation query that contains a high-precision GPS coordinate for a given user. Finally, this may be the result of a time synchronization issue between nodes, which causes an aggressive “backing off” from returning cached results for longer periods after an invalidation. Please make sure that all HCM and HDAL instances are synchronized with NTP or another mechanism.
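As a hypothetical illustration of the third case, a query that embeds high-precision coordinates almost never repeats verbatim, so it almost never hits the cache; rounding the values in the application makes the pattern repeat. The table and column names below are invented for illustration:

```sql
-- Effectively unique per request: high-precision coordinates rarely repeat,
-- so each result is cached but almost never served again.
SELECT name FROM poi WHERE lat = 37.7749295 AND lon = -122.4194155;

-- Rounded in the application: many users now issue an identical query,
-- so cached results can actually be re-used.
SELECT name FROM poi WHERE lat = 37.77 AND lon = -122.42;
```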
- I have an application configured to use MySQL, but many foreign characters are being corrupted when displayed by the application.
A: This is likely the result of a subtle interaction between the MySQL JDBC driver used by Heimdall and the configuration of the MySQL server’s default character set. To properly support all UTF-8 characters, MySQL needs to be configured to use the character set UTF8MB4, which can be set on a per-table basis. Unfortunately, the MySQL JDBC driver negotiates the character set at the server level, which is often set to UTF8. On startup, the driver attempts to autodetect the character set variables and will generate a warning if UTF8MB4 is not set properly at the global server level. Please check this output and correct the setting on the server side if necessary. There is currently no way to force the JDBC driver to negotiate UTF8MB4; it must be auto-negotiated, with the server set to this default. More information on this behavior can be found at the MySQL developer site.
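A quick way to inspect the server-side setting (the variable names are standard MySQL; the my.cnf edit sketched in the comments is one common fix, not the only one):

```sql
-- Inspect the server-level defaults the JDBC driver will auto-negotiate:
SHOW VARIABLES LIKE 'character_set_server';
SHOW VARIABLES LIKE 'collation_server';

-- If character_set_server reports 'utf8' rather than 'utf8mb4', set it in
-- my.cnf and restart the server:
--   [mysqld]
--   character-set-server = utf8mb4
--   collation-server     = utf8mb4_unicode_ci
```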
- When setting the “Auto-tune Cache” option on the VDB, my cache hit rate goes down dramatically.
A: This can happen when using a remote grid-cache, yet many tables have a very low cache hit rate. Since a cache miss requires traversing the network to the remote cache, then the database, this is obviously slower than if the content was retrieved from the database alone. As such, the auto-tune option will avoid caching and retrieving from the cache content that matches this behavior. One benefit of this however is that the cache is READY to receive traffic, and if the database slows down for any reason, you will find that the cache will start offloading more and more traffic. This acts as a safety valve for your database, and can help protect against DDoS attacks and other infrequent behavior.
- When I use “Test Source” against my MySQL server, it always fails with the error “Access denied for user”.
A: Please check that the MySQL user is configured to allow access from the IP address of the Heimdall Central Manager, with the same password AND host rules as for the application servers. This test is performed from the Central Manager, not the HDAL instances, so it may come from an unexpected IP address.
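For example, if the application user was only granted access from the application subnet, a hypothetical grant covering the Central Manager’s IP might look like the following (the user name, password, address, and schema are placeholders):

```sql
-- Allow the same user/password to also connect from the HCM's IP address:
CREATE USER 'appuser'@'10.1.2.3' IDENTIFIED BY 'same-password-as-app';
GRANT SELECT ON appdb.* TO 'appuser'@'10.1.2.3';
```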
- I am attempting to use the “Table Cache” rule, but it doesn’t appear that the rule is ever working.
A: Heimdall matches this rule against fully qualified table names. For example, if your database instance is “test” and the table is “table”, you will need to specify “test.table” when configuring table caching. To debug exactly which table names are being used (and thus need to be matched), add a log rule with the parameter “printtables” and a value of “true”. Then check the application or proxy logs after executing the query; a log entry should show which tables were detected in the query.
- I want to cache a query that doesn’t contain a table, and it doesn’t seem to work, help!
A: By default, if the HDAL can’t extract any tables from a query, such as “SELECT 1”, it will not cache the result, even if a wildcard cache rule is in place. In such cases, configure a cache rule for this query pattern and set a parameter of “tables” with the value “none”. This has a special meaning that allows caching even when no tables are detected.
- I want to use Memcached as my cache layer, why is it deprecated?
A: One of the features Heimdall uses to speed up invalidation synchronization is publish/subscribe in Hazelcast and Redis. This allows one instance to push a message that is received by all other instances with very little latency. Memcached and the JCache interfaces do not provide this functionality at this time, so we have to fall back to a polling mechanism, with a cache invalidation interval to avoid polling too often. This can result in some objects being served for up to half a second after another node has invalidated the table. As a result, while we support this option for those who want it, we strongly suggest using Hazelcast or Redis to vastly improve invalidation synchronization behavior.
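The difference can be sketched generically. This is an in-process illustration of the publish/subscribe pattern, not Heimdall’s internal code: with push-based invalidation every subscriber evicts the entry immediately, whereas a polling node may keep serving it until its next poll.

```python
class InvalidationBus:
    """Minimal publish/subscribe bus: publishing an invalidation
    immediately evicts the key from every subscribed node's cache."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, cache: dict) -> None:
        self.subscribers.append(cache)

    def publish_invalidate(self, key: str) -> None:
        for cache in self.subscribers:
            cache.pop(key, None)  # evict with no polling delay


# Two nodes holding the same cached entry:
node_a: dict = {"users": "cached rows"}
node_b: dict = {"users": "cached rows"}

bus = InvalidationBus()
bus.subscribe(node_a)
bus.subscribe(node_b)

bus.publish_invalidate("users")  # both nodes evict at once
print("users" in node_a, "users" in node_b)  # False False
```

With a polling design, each node would instead re-check for invalidations on a timer, leaving a staleness window as long as the polling interval.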
- I need to be able to manage configurations using (Chef, Puppet, or other tool).
A: The configuration files for Heimdall are managed in JSON format under the config directory of the HCM install. These files can be pushed from a central location if desired, and the API call “http://<HCM IP>:<port>/config/api/reload” can be used to reload the on-disk configuration. It returns the string “success” if the load completes without error. Authentication is handled by simple HTTP basic authentication, supported by most tools. Once loaded, any changed configuration data will be pushed out to the nodes managed by that HCM install. This technique can also be used to provide a redundant HCM install with failover, by synchronizing the configuration files between nodes.
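A minimal sketch of calling the reload endpoint with HTTP basic authentication. The host, port, and credentials are placeholders; the path and the “success” response come from the description above, and this assumes the endpoint accepts a simple GET:

```python
import base64
import urllib.request


def build_reload_url(host: str, port: int) -> str:
    """Construct the HCM config-reload endpoint URL."""
    return f"http://{host}:{port}/config/api/reload"


def reload_config(host: str, port: int, user: str, password: str) -> str:
    """Call the reload endpoint with HTTP basic auth and return the
    response body ("success" if the on-disk config loads cleanly)."""
    req = urllib.request.Request(build_reload_url(host, port))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()


print(build_reload_url("hcm.example.com", 8087))
# reload_config("hcm.example.com", 8087, "admin", "secret")  # live HCM only
```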
- We need to be able to archive off our analytics data for an extended period of time, and view it later.
A: The log data we generate is saved in two formats, both in the log directory of the HCM install. First, a batch of up to 500k records is stored in a CSV file. Once that limit is reached, the log is rolled, starting a new CSV file. The log roll triggers the HCM to process the CSV and generate a summary JSON file, which is used to generate the Analytics page data. If the JSON files are deleted, the CSV files will all be re-analyzed, so for archival purposes the JSON files are not strictly necessary. If space savings are at a premium, however, the JSON files can be archived off independently and analyzed on another HCM instance. The only requirement is that the new HCM instance must have a VDB whose name matches the one saved in the JSON fields, so that the VDB name is available for analysis selection.
- The volume of query logs we are generating is too large, how do we limit the rate of logs?
A: There are several mechanisms to do this. First, using regular expressions, you can match or exclude specific query patterns so that only queries of interest are logged. Second, you can perform log sampling, logging for example only one out of every ten matching queries: add the rule parameter “sample” with a value of “10” for 1/10th of queries to match, “100” for 1/100th, and so on. Alternatively, you can specify the “matchlimit” parameter, whose value is the rate in rule matches per second, e.g. “1000” to match on a log rule at most a thousand times a second. The sampling technique helps preserve the proportions of traffic represented in the logs over time, which is useful for trend analysis, while the match limit acts as an “escape valve” in the event query traffic spikes unexpectedly. Both will also reduce the CPU overhead of logging.
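The two throttling approaches can be sketched generically. This is an illustration of the concepts, not Heimdall’s implementation; only the quoted parameter names come from the answer above:

```python
import random
import time


def sample(n: int) -> bool:
    """The "sample" idea: match roughly 1 out of every n queries,
    preserving the proportions of traffic over time."""
    return random.randrange(n) == 0


class MatchLimiter:
    """The "matchlimit" idea: allow at most `limit` rule matches per
    second, acting as an escape valve during traffic spikes."""

    def __init__(self, limit: int):
        self.limit = limit
        self.window = int(time.time())
        self.count = 0

    def allow(self) -> bool:
        now = int(time.time())
        if now != self.window:  # a new one-second window has started
            self.window, self.count = now, 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False  # over the per-second budget; drop this log


limiter = MatchLimiter(limit=5)
results = [limiter.allow() for _ in range(6)]
print(results)  # first five allowed; the sixth is dropped within the same second
```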