RT Journal Article
SR Electronic
T1 Hardware-Efficient Compression of Neural Multi-Unit Activity Using Machine Learning Selected Static Huffman Encoders
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2022.03.25.485863
DO 10.1101/2022.03.25.485863
A1 Oscar W. Savolainen
A1 Zheng Zhang
A1 Peilong Feng
A1 Timothy G. Constandinou
YR 2022
UL http://biorxiv.org/content/early/2022/03/28/2022.03.25.485863.abstract
AB Objective: Recent advances in intracortical brain machine interfaces (iBMIs) have demonstrated the feasibility of using our thoughts, by sensing and decoding neural activity, for communication and cursor control tasks. It is essential that any invasive device be completely wireless so as to remove percutaneous connections and the associated infection risks. However, wireless communication consumes significant power, and there are strict heating limits in cortical tissue. Most iBMIs use Multi-Unit Activity (MUA) processing; however, the required bandwidth can be excessive for large channel counts in mm or sub-mm scale implants. As such, some form of data compression for MUA iBMIs is desirable.
Approach: We used a Machine Learning approach to select static Huffman encoders that worked together, and we investigated a broad range of resulting compression systems. These were implemented in reconfigurable hardware, and their power consumption, resource utilization and compression performance were measured.
Main Results: Our results identified a specific system that provided the best overall performance. We tested it on data from three datasets and found that, with less than 1% reduction in behavioural decoding performance from peak, the communication bandwidth was reduced from 1 kb/s/channel to approximately 27 bits/s/channel, using only a Look-Up Table and a 50 ms temporal resolution for threshold crossings. Relative to raw broadband data, this is a compression ratio of 1,700-15,000× and is over an order of magnitude higher than has been achieved before. Assuming 20 nJ per communicated bit, the total compression and communication power was between 1.37 and 1.52 μW/channel, occupying 246 logic cells and 4 kbit of RAM while supporting up to 512 channels.
Significance: We show that MUA data can be significantly compressed in a hardware-efficient manner, ‘out of the box’ with no calibration necessary. This can significantly reduce on-implant power consumption and enable much larger channel counts in wireless iBMIs. All results, code and hardware designs have been made publicly available.
Competing Interest Statement: The authors have declared no competing interest.
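
To illustrate the kind of encoding the abstract describes, the sketch below shows static Huffman coding of binned MUA threshold-crossing counts in Python. It is a minimal, hypothetical example, not the authors' implementation or their machine-learning-selected codebooks: the codeword table, symbol clipping and bin values are invented purely to show the mechanism of mapping per-bin spike counts to fixed, pre-trained variable-length codewords via a look-up table.

```python
"""
Minimal sketch (not the authors' implementation) of static Huffman
encoding of binned multi-unit activity (MUA): threshold crossings are
counted in 50 ms bins and each count is mapped to a fixed codeword via
a small look-up table. The codebook here is hypothetical; the paper
selects its codebooks with a machine-learning procedure on training data.
"""

# Hypothetical static codebook: MUA spike counts per 50 ms bin mapped to
# variable-length prefix-free bit strings. Low counts, which dominate in
# real MUA data, get the shortest codewords.
STATIC_CODEBOOK = {
    0: "0",
    1: "10",
    2: "110",
    3: "1110",
    4: "11110",
    5: "11111",   # counts >= 5 clipped into a single escape symbol
}
MAX_SYMBOL = max(STATIC_CODEBOOK)


def encode_channel(bin_counts):
    """Concatenate static codewords for a sequence of per-bin MUA counts."""
    return "".join(STATIC_CODEBOOK[min(c, MAX_SYMBOL)] for c in bin_counts)


if __name__ == "__main__":
    # One second of one channel at 50 ms resolution = 20 bins (toy values).
    counts = [0, 1, 0, 0, 2, 0, 1, 0, 0, 0, 3, 0, 0, 1, 0, 0, 0, 2, 0, 0]
    bitstream = encode_channel(counts)
    print(f"{len(bitstream)} bits for {len(counts)} bins "
          f"({len(bitstream) / len(counts):.2f} bits/bin)")
```

Because the codebook is fixed in advance, on-implant encoding reduces to a single table look-up per bin, which is consistent with the abstract's emphasis on a Look-Up Table implementation that needs no on-device calibration.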