Troubleshooting Apache Kudu
This guide covers basic Apache Kudu troubleshooting information. For more details, see the official Kudu documentation for troubleshooting.
Issues Starting or Restarting the Master or Tablet Server
Error during hole punch test
Kudu's log block manager requires hole punching support in the underlying filesystem, which depends on both the kernel version and the filesystem:
- RHEL or CentOS 6.4 or later, patched to kernel version 2.6.32-358 or later. Unpatched RHEL or CentOS 6.4 does not include a kernel with support for hole punching.
- Ubuntu 14.04 includes version 3.13 of the Linux kernel, which supports hole punching.
- Newer versions of the EXT4 or XFS filesystems support hole punching, but EXT3 does not. Older versions of XFS that do not support hole punching return an EOPNOTSUPP (operation not supported) error. Older versions of either EXT4 or XFS that do not support hole punching cause Kudu to emit an error message such as the following and to fail to start:
Error during hole punch test. The log block manager requires a filesystem with hole punching support such as ext4 or xfs. On el6, kernel version 2.6.32-358 or newer is required. To run without hole punching (at the cost of some efficiency and scalability), reconfigure Kudu with --block_manager=file. Refer to the Kudu documentation for more details. Raw error message follows.
- Without hole punching support, the log block manager never deletes blocks and therefore consumes ever more disk space, which makes it unsafe to use.
- Workaround - If you can’t use hole punching in your environment, you can still try Kudu. Enable the file block manager instead of the log block manager by adding the --block_manager=file flag to the commands you use to start the master and tablet servers. Note that the file block manager does not scale as well as the log block manager, and should only be used for small-scale deployments.
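For example, a minimal sketch of starting the daemons with the file block manager (the directory paths are illustrative placeholders; your deployment will differ):
kudu-master --fs_wal_dir=/data/kudu/master --block_manager=file
kudu-tserver --fs_wal_dir=/data/kudu/tserver --block_manager=file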
NTP Clock Synchronization Issues
If the system clock is not synchronized when Kudu starts, the master and tablet servers fail with errors such as the following:
I0929 10:00:26.570979 21371 master_main.cc:52] Initializing master server...
F0929 10:00:26.571107 21371 master_main.cc:53] Check failed: _s.ok() Bad status: Service unavailable: Clock is not synchronized: Error reading clock. Clock considered unsynchronized. Errno: Invalid argument
tablet_server_main.cc:48] Initializing tablet server...
F0929 10:00:26.572041 21370 tablet_server_main.cc:49] Check failed: _s.ok() Bad status: Service unavailable: Clock is not synchronized: Error reading clock. Clock considered unsynchronized. Errno: Success
To resolve such errors, make sure that NTP is installed on each master and tablet server, and that all NTP processes synchronize to the same time source.
- To install NTP, use the command appropriate for your operating system:
OS | Command |
---|---|
Debian/Ubuntu | sudo apt-get install ntp |
RHEL/CentOS | sudo yum install ntp |
- If NTP is installed but the clock is reported as unsynchronized, Kudu will not start, and will emit a message such as:
F0924 20:24:36.336809 14550 hybrid_clock.cc:191] Couldn't get the current time: Clock unsynchronized. Status: Service unavailable: Error reading clock. Clock considered unsynchronized.
You can monitor clock synchronization status by running the ntptime command. The relevant value is what is reported for maximum error. Note that NTP requires a network connection and may take a few minutes to synchronize the clock. In some cases a spotty network connection may make NTP report the clock as unsynchronized. A common, though temporary, workaround for this is to restart NTP with one of the following commands.
OS | Command |
---|---|
Debian/Ubuntu | sudo service ntp restart |
RHEL/CentOS | sudo /etc/init.d/ntpd restart |
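To check the value ntptime reports for maximum error directly, you can filter its output as follows (the output line shown is illustrative; actual values will vary):
ntptime | grep "maximum error"
  maximum error 224455 us, estimated error 383 us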
- In addition to the clocks being synchronized, the maximum clock error (not to be confused with the estimated error) must be below a threshold relevant to your deployment. The default threshold is 10 seconds, but it can be configured using the --max_clock_sync_error_usec flag.
If NTP is installed and synchronized, but the maximum clock error is too high, you will see a message such as:
Sep 17, 8:13:09.873 PM FATAL hybrid_clock.cc:196 Couldn't get the current time: Clock synchronized, but error: 11130000, is past the maximum allowable error: 10000000
or
Sep 17, 8:32:31.135 PM FATAL tablet_server_main.cc:38 Check failed: _s.ok() Bad status: Service unavailable: Cannot initialize clock: Cannot initialize HybridClock. Clock synchronized but error was too high (11711000 us).
If NTP reports the clock as synchronized, but the maximum error is consistently too high, you can increase the threshold by setting the --max_clock_sync_error_usec flag. For example, to increase the maximum allowable error to 20 seconds, set the flag as follows: --max_clock_sync_error_usec=20000000.
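For example, a minimal sketch of passing the flag when starting a tablet server directly (the WAL directory is an illustrative placeholder):
kudu-tserver --fs_wal_dir=/data/kudu/tserver --max_clock_sync_error_usec=20000000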
Reporting Kudu Crashes Using Breakpad
Kudu uses the Google Breakpad library to generate a minidump whenever Kudu experiences a crash. A minidump file contains important debugging information about the process that crashed, including shared libraries loaded and their versions, a list of threads running at the time of the crash, the state of the processor registers and a copy of the stack memory for each thread, and CPU and operating system version information. These minidumps are typically only a few MB in size and are generated even if core dump generation is disabled. Currently, generating minidumps is only possible on Linux deployments.
By default, Kudu stores its minidumps in a subdirectory of the configured glog directory called minidumps. This location can be customized by setting the --minidump_path flag. Kudu will retain only a certain number of minidumps before deleting the older ones, in an effort to avoid filling up the disk with minidump files. The maximum number of minidumps that will be retained can be controlled by setting the --max_minidumps gflag.
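For example, a hedged sketch that stores minidumps in a custom location and keeps at most 20 of them (the paths are illustrative placeholders):
kudu-tserver --fs_wal_dir=/data/kudu/tserver --minidump_path=/data/kudu/minidumps --max_minidumps=20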
Minidumps contain information specific to the binary that created them and are therefore not useful without access to the exact binary that crashed, or a very similar binary.
Kudu developers can access the minidump tools in their development environment because they are installed as part of the Kudu thirdparty build. They can be found under thirdparty/installed/uninstrumented/bin. For example, thirdparty/installed/uninstrumented/bin/minidump-2-core.
If minidumps are enabled, it is possible to force Kudu to create a minidump without killing the process. To do that, send a USR1 signal to the kudu-tserver or kudu-master process. For example:
sudo pkill -USR1 kudu-tserver
Viewing the minidump stack trace with the GNU Debugger
Although a minidump contains no heap information, it does contain thread and stack information. You can convert a minidump to a core file to view it with GDB.
To convert the minidump (.dmp file) to a core file:
minidump-2-core -o 02cb4a97-ee37-6454-73a9d9cb-590c7dde.core \
  02cb4a97-ee37-6454-73a9d9cb-590c7dde.dmp
To view the core file with GDB (on a parcel deployment):
gdb /opt/cloudera/parcels/KUDU/lib/kudu/sbin-release/kudu-master \
  -s /opt/cloudera/parcels/KUDU/lib/debug/usr/lib/kudu/sbin-release/kudu-master.debug \
  02cb4a97-ee37-6454-73a9d9cb-590c7dde.core
For more information, see Getting started with Breakpad and Chrome developer tips for minidump file debugging.
Troubleshooting Performance Issues
Kudu Tracing
The Kudu master and tablet server daemons include built-in support for tracing based on the open source Chromium Tracing framework. You can use tracing to diagnose latency issues or other problems on Kudu servers.
Accessing the Tracing Web Interface
The tracing interface is part of the embedded web server in each of the Kudu daemons, and can be accessed using a web browser. Note that while the interface has been known to work in recent versions of Google Chrome, other browsers may not work as expected.
Daemon | URL |
---|---|
Tablet Server | <tablet-server-1.example.com>:8050/tracing.html |
Master | <master-1.example.com>:8051/tracing.html |
Saving Traces
After you have collected traces, you can save these traces as JSON files by clicking Save. To load and analyze a saved JSON file, click Load and choose the file.
RPC Timeout Traces
If client applications are experiencing RPC timeouts, the Kudu tablet server WARNING level logs should contain a log entry which includes an RPC-level trace. For example:
W0922 00:56:52.313848 10858 inbound_call.cc:193] Call kudu.consensus.ConsensusService.UpdateConsensus from 192.168.1.102:43499 (request call id 3555909) took 1464ms (client timeout 1000).
W0922 00:56:52.314888 10858 inbound_call.cc:197] Trace:
0922 00:56:50.849505 (+ 0us) service_pool.cc:97] Inserting onto call queue
0922 00:56:50.849527 (+ 22us) service_pool.cc:158] Handling call
0922 00:56:50.849574 (+ 47us) raft_consensus.cc:1008] Updating replica for 2 ops
0922 00:56:50.849628 (+ 54us) raft_consensus.cc:1050] Early marking committed up to term: 8 index: 880241
0922 00:56:50.849968 (+ 340us) raft_consensus.cc:1056] Triggering prepare for 2 ops
0922 00:56:50.850119 (+ 151us) log.cc:420] Serialized 1555 byte log entry
0922 00:56:50.850213 (+ 94us) raft_consensus.cc:1131] Marking committed up to term: 8 index: 880241
0922 00:56:50.850218 (+ 5us) raft_consensus.cc:1148] Updating last received op as term: 8 index: 880243
0922 00:56:50.850219 (+ 1us) raft_consensus.cc:1195] Filling consensus response to leader.
0922 00:56:50.850221 (+ 2us) raft_consensus.cc:1169] Waiting on the replicates to finish logging
0922 00:56:52.313763 (+1463542us) raft_consensus.cc:1182] finished
0922 00:56:52.313764 (+ 1us) raft_consensus.cc:1190] UpdateReplicas() finished
0922 00:56:52.313788 (+ 24us) inbound_call.cc:114] Queueing success response
These traces can indicate which part of the request was slow. Make sure you include them when filing bug reports related to RPC latency outliers.
Kernel Stack Watchdog Traces
Each Kudu server process has a background thread called the Stack Watchdog, which monitors other threads in the server to detect cases where a thread is blocked for longer than expected. These traces can indicate operating system issues or bottlenecked storage.
When the watchdog thread identifies a case of thread blockage, it logs an entry in the WARNING log as follows:
W0921 23:51:54.306350 10912 kernel_stack_watchdog.cc:111] Thread 10937 stuck at /data/kudu/consensus/log.cc:505 for 537ms:
Kernel stack:
[<ffffffffa00b209d>] do_get_write_access+0x29d/0x520 [jbd2]
[<ffffffffa00b2471>] jbd2_journal_get_write_access+0x31/0x50 [jbd2]
[<ffffffffa00fe6d8>] __ext4_journal_get_write_access+0x38/0x80 [ext4]
[<ffffffffa00d9b23>] ext4_reserve_inode_write+0x73/0xa0 [ext4]
[<ffffffffa00d9b9c>] ext4_mark_inode_dirty+0x4c/0x1d0 [ext4]
[<ffffffffa00d9e90>] ext4_dirty_inode+0x40/0x60 [ext4]
[<ffffffff811ac48b>] __mark_inode_dirty+0x3b/0x160
[<ffffffff8119c742>] file_update_time+0xf2/0x170
[<ffffffff8111c1e0>] __generic_file_aio_write+0x230/0x490
[<ffffffff8111c4c8>] generic_file_aio_write+0x88/0x100
[<ffffffffa00d3fb1>] ext4_file_write+0x61/0x1e0 [ext4]
[<ffffffff81180f5b>] do_sync_readv_writev+0xfb/0x140
[<ffffffff81181ee6>] do_readv_writev+0xd6/0x1f0
[<ffffffff81182046>] vfs_writev+0x46/0x60
[<ffffffff81182102>] sys_pwritev+0xa2/0xc0
[<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
[<ffffffffffffffff>] 0xffffffffffffffff
User stack:
@ 0x3a1ace10c4 (unknown)
@ 0x1262103 (unknown)
@ 0x12622d4 (unknown)
@ 0x12603df (unknown)
@ 0x8e7bfb (unknown)
@ 0x8f478b (unknown)
@ 0x8f55db (unknown)
@ 0x12a7b6f (unknown)
@ 0x3a1b007851 (unknown)
@ 0x3a1ace894d (unknown)
@ (nil) (unknown)
These traces can be useful for diagnosing root-cause latency issues in Kudu, especially when they are caused by underlying systems such as disk controllers or file systems.
Memory Limits
Kudu has a hard and a soft memory limit. The hard memory limit is the maximum amount of memory a Kudu process is allowed to use, and is controlled by the --memory_limit_hard_bytes flag. The soft memory limit is a percentage of the hard memory limit, controlled by the --memory_limit_soft_percentage flag (default 80%); once a process exceeds the soft limit, it starts rejecting some write operations.
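To make the limits concrete, here is a hypothetical sketch: with a 4 GiB hard limit and the default soft limit percentage, the process may begin rejecting writes once it uses about 3.2 GiB:
# Illustrative values only: soft limit = 80% of 4 GiB = ~3.2 GiB
--memory_limit_hard_bytes=4294967296
--memory_limit_soft_percentage=80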
If the logs or RPC traces contain messages such as the following example, then Kudu is rejecting writes due to memory back pressure. This may result in write timeouts.
Service unavailable: Soft memory limit exceeded (at 96.35% of capacity)
There are several ways to relieve the memory pressure on Kudu:
- If the host has more memory available for Kudu, increase --memory_limit_hard_bytes.
- Increase the rate at which Kudu can flush writes from memory to disk by increasing the number of disks or increasing the number of maintenance manager threads with --maintenance_manager_num_threads (see the sketch after this list). Generally, the recommended ratio of maintenance manager threads to data directories is 1:3.
- Reduce the volume of writes flowing to Kudu on the application side.
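For example, a minimal sketch of the 1:3 ratio above for a tablet server with six data directories (the directory layout is an illustrative assumption):
# Six data directories at a 1:3 thread-to-directory ratio = two maintenance threads.
kudu-tserver --fs_wal_dir=/data0/kudu/wal \
  --fs_data_dirs=/data1/kudu,/data2/kudu,/data3/kudu,/data4/kudu,/data5/kudu,/data6/kudu \
  --maintenance_manager_num_threads=2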
Finally, check the value of the --block_cache_capacity_mb setting. This setting determines the maximum size of Kudu's block cache. While a higher value can help with read and write performance, setting it too high as a percentage of the --memory_limit_hard_bytes setting is harmful. Do not raise --block_cache_capacity_mb above --memory_pressure_percentage (default 60%) of --memory_limit_hard_bytes, as this will cause Kudu to flush aggressively even if write throughput is low. The recommended value for --block_cache_capacity_mb is below the following:
(50% * --memory_pressure_percentage) * --memory_limit_hard_bytes
With the defaults, this means the --block_cache_capacity_mb should not exceed 30% of --memory_limit_hard_bytes.
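As a hypothetical worked example, with --memory_limit_hard_bytes=4294967296 (4 GiB) and the default --memory_pressure_percentage of 60%, the formula gives 50% * 60% * 4096 MB, or roughly 1228 MB:
# Illustrative values only: 0.5 * 0.6 * 4096 MB = ~1228 MB
--memory_limit_hard_bytes=4294967296
--block_cache_capacity_mb=1228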
Disk Issues
When Kudu starts, it checks each configured data directory, expecting either that all are already initialized or that all are empty. If a server fails to start with a log message such as the following, then this precondition check has failed.
Check failed: _s.ok() Bad status: Already present: Could not create new FS layout: FSManager root is not empty: /data0/kudu/data
This could be because Kudu was configured with non-empty data directories on first startup, or because a previously-running, healthy Kudu process was restarted and at least one data directory was deleted or is somehow corrupted, perhaps because of a disk error. If it is the latter, refer to Changing Directory Configuration.
Slow DNS Lookups and nscd
If the logs contain messages like the one below about slow name resolution, it may help to run nscd, the name service cache daemon.
W0926 11:19:01.339553 27231 net_util.cc:129] Time spent resolving address for kudu-tserver.example.com: real 4.647s user 0.000s sys 0.000s
Consult your operating system's documentation for how to install and enable nscd.
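As a hedged example, on the distributions discussed earlier the package is typically named nscd, so the install commands would mirror the NTP ones above (package names may vary):
OS | Command |
---|---|
Debian/Ubuntu | sudo apt-get install nscd |
RHEL/CentOS | sudo yum install nscd |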
Usability Issues
ClassNotFoundException: com.cloudera.kudu.hive.KuduStorageHandler
You will encounter this exception when you try to access a Kudu table using Hive. This is not a case of a missing jar; rather, Impala stores Kudu metadata in Hive in a format that is unreadable to other tools, including Hive itself and Spark. Currently, there is no workaround for Hive users. Spark users can work around this by creating temporary tables.
Runtime Error: Could not create thread: Resource temporarily unavailable (error 11)
You will encounter this error when Kudu is unable to create more threads, usually on versions older than CDH 5.15 / Kudu 1.7. It typically occurs on tablet servers, and indicates that the tablet server hosts too many tablet replicas.
To fix the issue, you can raise the nproc ulimit as detailed in the documentation for your operating system or distribution.
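For example, a hedged sketch of raising the limit persistently for a kudu service user via /etc/security/limits.conf (the user name and value are illustrative; some distributions use files under /etc/security/limits.d/ instead):
# Allow the kudu user to create up to 32768 processes/threads.
kudu soft nproc 32768
kudu hard nproc 32768
You can inspect the current limit for a shell with ulimit -u.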
However, the better solution is to reduce the number of replicas on the tablet server. This may involve rethinking the table's partitioning schema. For the recommended limits on number of replicas per tablet server, see the known issues and scaling limitations documentation.
Tombstoned or STOPPED tablet replicas
You may notice that some replicas on a tablet server are in a STOPPED state and remain on the server indefinitely. These replicas are tombstones. A tombstone indicates that the tablet server once held a bona fide replica of its tablet. For example, if a tablet server goes down and its replicas are re-replicated elsewhere, and the server later rejoins the cluster, its replicas will become tombstones. A tombstone remains until the table it belongs to is deleted, or until a new replica of the same tablet is placed on the tablet server. A count of tombstoned replicas and details of each one are available on the /tablets page of the tablet server web UI. The Raft consensus algorithm that Kudu uses for replication requires tombstones for correctness in certain rare situations. They consume minimal resources and hold no data. They must not be deleted.
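For example, you can fetch the /tablets page from the command line (the hostname is a placeholder, and 8050 is the default tablet server web UI port shown in the tracing table above):
curl http://tablet-server-1.example.com:8050/tablets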