- A Hyperdimensional Computing (HDC) module for Vision Transformers is being developed. The HD part can bring on-chip
learning capability, while the transformer backbone can extend HD computing to generative settings.
- A ladder-side HD computation branch is added alongside the traditional transformer backbone to give transformers few-shot learning capability.
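The few-shot mechanism the HD side branch enables can be illustrated with a minimal sketch. This is not the actual ladder-side design; it is a generic HDC classifier, assuming (hypothetically) that backbone features are projected to bipolar hypervectors and class prototypes are formed by bundling a few examples per class, with no gradient updates:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (typical HDC scale)

def encode(x, proj):
    # Hypothetical encoder: random projection of a backbone feature
    # vector, followed by sign() to get a bipolar hypervector.
    return np.sign(proj @ x)

def train_fewshot(X, y, proj, n_classes):
    # Few-shot "training": bundle (element-wise sum) the hypervectors
    # of each class's few examples into one prototype per class.
    protos = np.zeros((n_classes, proj.shape[0]))
    for xi, yi in zip(X, y):
        protos[yi] += encode(xi, proj)
    return protos

def classify(x, proj, protos):
    # Predict the class whose prototype is most cosine-similar.
    hv = encode(x, proj)
    sims = protos @ hv / (np.linalg.norm(protos, axis=1)
                          * np.linalg.norm(hv) + 1e-9)
    return int(np.argmax(sims))
```

Because prototypes are built by simple accumulation, new classes can be added with a handful of examples and no backpropagation, which is what makes HDC attractive for on-chip learning.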
- Use the properties of HRR-based VSA models to retrieve input data in a generative fashion, avoiding the computational cost of the traditional approach of reading out prototypes from a distributed memory system.
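The HRR property relied on here can be sketched as follows. In Holographic Reduced Representations, binding is circular convolution and unbinding is circular correlation (convolution with an approximate inverse), so unbinding a key from a superposed memory trace reconstructs a noisy copy of the bound value directly, rather than requiring a search over stored prototypes. The dimensionality and the two-pair trace below are illustrative choices, not values from the project:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 2048  # illustrative HRR dimensionality

def cconv(a, b):
    # Circular convolution (HRR binding), computed via FFT.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inv(a):
    # Approximate inverse for unbinding: a~[i] = a[-i mod D].
    return np.concatenate(([a[0]], a[1:][::-1]))

def ccorr(a, b):
    # Circular correlation = binding with the approximate inverse.
    return cconv(inv(a), b)

# Keys and values drawn i.i.d. N(0, 1/D), the standard HRR setup.
key1, key2 = rng.normal(0, 1 / np.sqrt(D), (2, D))
val1, val2 = rng.normal(0, 1 / np.sqrt(D), (2, D))

# Superposed memory trace holding two bound key-value pairs.
trace = cconv(key1, val1) + cconv(key2, val2)

# Generative retrieval: unbinding with key1 reconstructs a noisy
# copy of val1 in one step -- no prototype readout loop.
recon = ccorr(key1, trace)
sim = recon @ val1 / (np.linalg.norm(recon) * np.linalg.norm(val1))
```

The reconstruction `recon` is strongly correlated with `val1` and nearly orthogonal to `val2`, which is the basis for retrieving inputs generatively instead of comparing against every stored prototype.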
- Working on the trade-off between 2D, 2.5D, and 3D integration for accelerating the above model, featuring separate FFN, attention, and HD cores.