Why Ctrl+; Doesn't Work in Your Terminal and How to Fix It
Recently, I needed to map a short keybinding in Neovim, but I had trouble finding a key combination that wasn’t already taken.
I had the idea of using Ctrl+;, but after setting a mapping for <C-;> in my
Neovim config, it didn’t work as expected! In fact, it didn’t register at all.
The reason lies deep in the history of ASCII and terminal emulators.
Understanding Terminal Input
Ctrl key combinations behave in unique ways within the terminal. Standard keys like Tab and Enter have Ctrl variants that trigger the exact same behavior.
If you are on Linux (or WSL), you can verify this by running the showkey -a
command in your terminal, which displays the raw keycodes sent by your keyboard
(in decimal, octal, and hexadecimal). If you press either Enter or
Ctrl+m, you can see that they both send the same keycode:
^M 13 0015 0x0d
When I tried with Ctrl+;, I found the result was the same as just pressing ; alone:
; 59 0073 0x3b
No wonder my Neovim mapping didn’t work. But why does this happen?
The Legacy of ASCII
To understand why, we need to look back at how terminals historically handled input. Early terminals used the standard 7-bit ASCII encoding scheme. When the Ctrl modifier was held, the terminal sent the ASCII value of the corresponding uppercase letter with the top (most significant) bit cleared, which effectively subtracts 64 from the decimal value.
Let’s break down the earlier example of Ctrl+m and Enter:
- Capital `M` has an ASCII value of `1001101` (decimal 77).
- Holding Ctrl clears the top bit, resulting in `0001101` (decimal 13).
- Checking the ASCII table, decimal 13 corresponds to Carriage Return, which is exactly what the Enter key sends!
Char: 'M' (Decimal 77)
Binary: 1 0 0 1 1 0 1
│
[Ctrl] x <-- Clears top bit
↓
Result: 0 0 0 1 1 0 1
Decimal: 13 (Carriage Return)
In the case of Ctrl+;, the ASCII value of ; is
0111011 (decimal 59). Since the top bit is already 0, holding
Ctrl has no effect and the terminal still sends just ;.
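The bit-clearing rule can be sketched in a few lines of Python (the function name here is my own; real terminals implement this in their input handling):

```python
def legacy_ctrl(ch: str) -> int:
    """Byte a legacy terminal sends for Ctrl+<ch>: uppercase the
    character, then clear bit 6 (value 64), the top bit of its
    7-bit ASCII code."""
    return ord(ch.upper()) & ~0x40

print(legacy_ctrl("m"))  # 13, Carriage Return: same as Enter
print(legacy_ctrl("i"))  # 9, Horizontal Tab: same as Tab
print(legacy_ctrl(";"))  # 59, top bit already 0: still just ';'
```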
After further testing, I found that different terminals handle these “out of
bounds” Ctrl combinations differently. WezTerm, for instance,
clears the top two bits of the ASCII value, so it sends
Ctrl+; as 0011011 (decimal 27, the same as
Esc or Ctrl+[). Alacritty, on the other hand,
sends nothing at all. The behaviour across terminal emulators is
inconsistent and non-standardized!
This historical baggage creates a problem for modern use cases. You may have tried to map Ctrl+i in your terminal-based text editor, only to find that the Tab key gets remapped as well, or tried to map Ctrl+/ to find that the actual keycode is Ctrl+_ instead.
Modernizing Input with the Kitty Keyboard Protocol
The solution to this 1970s problem is to stop relying on ASCII control codes for key combinations. The kitty keyboard protocol, created around 2022 and based on initial work in fixterms, defines a method for sending unambiguous keycodes using the CSI u format. The protocol actually covers much more, such as reporting other keyboard events like key releases, but for our purposes we will focus on sending keycodes.
In addition to kitty, many other modern
terminal emulators now support this protocol out of the box, such as Alacritty,
WezTerm, and iTerm2. If you use a terminal emulator that supports this protocol,
you’re in luck. There’s no manual setup required, apart from perhaps needing to
set an option like enable_kitty_keyboard = true in your settings. But
if you are stuck on a setup that doesn’t support it natively, you have to get
your hands dirty.
Calculating and Sending CSI u Codes
Unfortunately for me, my workflow relies on WSL through Windows Terminal, which does not natively support the kitty keyboard protocol. I had to manually send the specific CSI u codes for each key combination I wanted to use.
The CSI u format is as follows:
CSI<codepoint>;<modifiers>u
Where:
- `CSI` is the Control Sequence Introducer, represented by the escape code `\033[` (Esc followed by `[`). Note: the Esc character has several equivalent representations, depending on the language. You may see it as `\033` (octal), `\x1b` (hex), `\u001b` (Unicode), `^[`, or `\e`.
- `<codepoint>` is the Unicode value of the key to send, as a decimal number. This must be the codepoint of the lowercase (unshifted) character.
- `;` is a literal semicolon separator.
- `<modifiers>` is an integer representing the modifier keys held down. This value is calculated as 1 plus the sum of the individual modifier values:

  | Key   | Value |
  | ----- | ----- |
  | Shift | 1     |
  | Alt   | 2     |
  | Ctrl  | 4     |

  For example, the modifier value for Ctrl+Shift would be 1 + 1 + 4 = 6.
- `u` is the literal character u indicating the end of the sequence.
Now to send the Ctrl+; that I originally wanted: the
<codepoint> of ; is 59, and the <modifiers> value for Ctrl is
1 + 4 = 5. Therefore, the full CSI u sequence is \033[59;5u.
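That arithmetic is easy to mechanize. Here is a small Python sketch (the function and dictionary names are my own) that builds a CSI u sequence from a key and its modifiers:

```python
# Modifier bit values from the CSI u scheme described above.
MODIFIERS = {"shift": 1, "alt": 2, "ctrl": 4}

def csi_u(char: str, *mods: str) -> str:
    """Build the CSI u escape sequence for a key plus modifiers."""
    value = 1 + sum(MODIFIERS[m] for m in mods)
    return f"\033[{ord(char)};{value}u"

print(repr(csi_u(";", "ctrl")))           # '\x1b[59;5u'
print(repr(csi_u(";", "ctrl", "shift")))  # '\x1b[59;6u'
```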
To set this keybinding in Windows Terminal, I had to add the following to my
settings.json file under the actions array:
{
"command": {
"action": "sendInput",
"input": "\u001b[59;5u"
},
"keys": "Ctrl+;"
}
(Note: You might get a schema warning about keys, but when you save the file,
it will automatically get converted to the correct format.)
With this done, I tried the <C-;> mapping in Neovim again and it worked
perfectly. But when I ran showkey -a again to verify the keycode sent, I was
baffled. It still showed the same output as before:
; 59 0073 0x3b
Although the new keybinding works correctly, as evidenced by Neovim, showkey -a
did not register it! What was going on here?
The Handshake
The missing link was my terminal multiplexer, Zellij. If I
ran showkey -a directly in the terminal without Zellij, I could see the
correct keycode being sent (I’ve added arrows on the right to indicate which
bytes correspond to which part of the CSI u sequence):
^[[59;5u 27 0033 0x1b <-- ^[ (ESC)
91 0133 0x5b <-- [
53 0065 0x35 <-- 5
57 0071 0x39 <-- 9
59 0073 0x3b <-- ;
53 0065 0x35 <-- 5
117 0165 0x75 <-- u
When you use a terminal multiplexer such as Zellij or Tmux, it sits in between your actual terminal emulator and the application running inside.
When I pressed Ctrl+;, the following chain of events happened:
- Windows Terminal sent the CSI u sequence (`\033[59;5u`) to Zellij.
- Zellij recognized the sequence and had to decide what to do with it.
- Zellij would only forward the sequence to the application if the application explicitly asked for an extended keyboard protocol. Otherwise, it would downgrade the sequence back to standard ASCII (in this case, just `;`).
This explains why Neovim worked but showkey -a did not. On startup, Neovim
sends a query sequence to the terminal: CSI ? u, which asks “Do you support
the kitty keyboard protocol?” Zellij sees this, responds with its capabilities,
and recognizes that Neovim supports the protocol. As a result of this handshake,
Zellij knows to forward any CSI u sequences to Neovim.
showkey, on the other hand, being an older utility, does not send this query.
Therefore, Zellij assumes it does not support the protocol and downgrades the
sequence back to safe ASCII.
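Zellij’s decision can be caricatured in a few lines of Python (a toy sketch with my own naming; the real implementation is far more involved): forward CSI u sequences only to applications that performed the handshake, and downgrade them otherwise.

```python
import re

# Matches a CSI u key sequence: ESC [ <codepoint> ; <modifiers> u
CSI_U = re.compile(r"\x1b\[(\d+);(\d+)u")

def deliver(seq: str, app_did_handshake: bool) -> str:
    """What a multiplexer hands to the inner application."""
    m = CSI_U.fullmatch(seq)
    if m is None or app_did_handshake:
        return seq                   # forward verbatim
    return chr(int(m.group(1)))      # downgrade to the bare key

print(repr(deliver("\x1b[59;5u", True)))   # like Neovim: '\x1b[59;5u'
print(repr(deliver("\x1b[59;5u", False)))  # like showkey: ';'
```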
For this “clever” behaviour to work, the terminal multiplexer needs to support
the kitty keyboard protocol. If it does not, it will naively forward the
CSI u sequences to the application, which may cause issues if the application
does not understand them. For Zellij, you need to be on version 0.41 or later,
and make sure support_kitty_keyboard_protocol is enabled (which it should be
by default). For tmux, you can follow the docs
here and check
this post.
If you want an alternative to showkey -a that supports the kitty keyboard
protocol, you can use the kitten show-key -m kitty command that comes with the
kitty CLI tools.
Manually Requesting the Protocol
If you use a terminal emulator with native support for the kitty keyboard
protocol, it most likely still requires the application to perform this
handshake before the protocol is enabled. So you may find that showkey -a
fails to display CSI u sequences even without a terminal multiplexer in the
way.
The handshake is just an escape sequence written to the terminal, so you can
perform it manually. The CSI code CSI > 1 u tells the terminal to enable the
extended key codes. In a standard shell, you can send this code using printf:
printf '\033[>1u'
Here is what is happening in that sequence:
- `\033[` is the Control Sequence Introducer (CSI) as before.
- `>` indicates this is a progressive enhancement command.
- `1` is the flag that we want to enable; in this case, to disambiguate escape codes.
- `u` indicates the end of the sequence.
After running this command, showkey -a should now correctly display the CSI u
sequences being sent.
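One caveat with enabling flags this way: per my reading of the protocol spec, flag sets are kept on a stack, and a well-behaved program pushes its flags on entry and pops them on exit so the terminal is left as it was found. A minimal Python sketch of that discipline (the sequence strings are the spec's push and pop codes; everything else is my own naming):

```python
import sys

# Per the protocol, CSI > <flags> u pushes a new flag entry onto the
# stack and CSI < u pops one, restoring the previous state.
PUSH_DISAMBIGUATE = "\x1b[>1u"
POP = "\x1b[<u"

sys.stdout.write(PUSH_DISAMBIGUATE)  # enable extended key reporting
sys.stdout.flush()
# ... read and handle CSI u key sequences here ...
sys.stdout.write(POP)                # leave the terminal as we found it
sys.stdout.flush()
```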
If you find that Neovim doesn’t accept the CSI u sequences, check your
$TERM environment variable
first. If that still doesn’t work, you can force Neovim to send the CSI code to
enable the protocol by adding the following line to your init.lua:
vim.cmd([[call chansend(v:stderr, "\x1b[>1u")]])
Summary
The terminal is a modern interface built on 50-year-old foundations. While legacy ASCII behaviour served us well for decades, it limits our ability to use more complex keybindings in modern workflows.
By understanding the CSI u codes used by the kitty keyboard protocol, we can unlock the full range of our keyboards within the terminal. Next time you want to bind Ctrl+i differently from Tab, you’ll know how to do it!
Resources
- Read about Device Attributes which are used to query terminal capabilities.
- The kitty keyboard protocol specifies much more than extended key combinations.
- Check how Neovim queries for terminal capabilities.
- fixterms is some earlier work on using CSI u codes to fix Ctrl keybindings.
- This StackOverflow post started me on this journey.