The compiler should take a Python or C++ file as input and output a circuit description file. Let's refer to this format as the CDL (circuit description language).
**Following is the flow I have in mind**
**Python -> MLIR**
- The front-end language does not have to be Python, but I suspect Python will be the easiest.
- Compile the source program down to MLIR. This is required for the next step because HEIR only accepts MLIR as input.
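For concreteness, here is roughly what a trivial program could look like once lowered to MLIR's `arith` dialect (a sketch only; the exact dialects and types HEIR expects are not pinned down here):

```mlir
// A Python function like `def add(a, b): return a + b` on 8-bit ints
// might lower to something like:
func.func @add(%a: i8, %b: i8) -> i8 {
  %sum = arith.addi %a, %b : i8
  return %sum : i8
}
```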
**MLIR -> BOOLEAN GATES**
- "--yosys-optimizer" pass takes standard MLIR and converts it to boolean circuit using Yosys. MLIR code is first fed to Yosys to booleanize and then is optimised, using ABC.
- The boolean circuit can contain either 3-input LUTs or plain 2-input boolean gates.
- This can be configured via a flag.
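To make the LUT representation concrete, here is a minimal sketch of what a 3-input LUT is: an 8-entry truth table indexed by the input bits. The names and encoding below are illustrative, not HEIR's actual output format.

```python
# Sketch: a 3-input LUT is just an 8-entry truth table.
# Representation is illustrative, not HEIR's actual output format.

def lut3(table, a, b, c):
    """Evaluate a 3-input LUT. `table` is a list of 8 output bits,
    indexed by the input bits (a is the most significant bit)."""
    return table[(a << 2) | (b << 1) | c]

# Example: the two-gate function (a AND b) OR c collapses into one LUT3.
AND_OR = [(a & b) | c for a in (0, 1) for b in (0, 1) for c in (0, 1)]

assert lut3(AND_OR, 1, 1, 0) == 1   # (1 AND 1) OR 0
assert lut3(AND_OR, 0, 1, 0) == 0   # (0 AND 1) OR 0
assert lut3(AND_OR, 0, 0, 1) == 1   # (0 AND 0) OR 1
```

This is also why clubbing gates into wider LUTs pays off under FHE: evaluating the whole table costs one programmable bootstrap (PBS), regardless of how many 2-input gates it replaced.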
**BOOLEAN GATES -> CDL**
- Take the boolean-gate circuit description, optimise it, and output CDL. The CDL describes the circuit purely in terms of lookup tables.
- How to optimise
- Greedily figure out ways to club multiple gates into a single n-input lookup table.
- References
- Autohog: https://eprint.iacr.org/2024/1250.pdf
- https://eprint.iacr.org/2024/1204.pdf
- Figure out the cost for different values of n
- Cost = PBS cost * no. of gates
- Question: should parallelism be counted in the cost?
- It depends on how parallelism varies across different values of n
- We can defer the parallelism-aware cost calculation for later.
- Select the n with the lowest cost
- Figure out how to parallelise lookups across multiple cores
- Output the description of the lookup tables (and, if needed, the parallelism information as well)
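The cost comparison above can be sketched as follows. The PBS costs and the clustering ratio are made-up placeholders (real numbers depend on the FHE scheme parameters and on how well the greedy clustering actually performs); only the structure of the comparison is the point.

```python
# Sketch of comparing Cost = PBS cost * no. of gates across LUT widths n.

# Hypothetical relative PBS cost per bootstrap for an n-input LUT
# (placeholder values, NOT measured numbers).
PBS_COST = {2: 1.0, 3: 1.3, 4: 2.0}

def gate_count_after_clustering(num_gates_2in, n):
    """Very rough model: assume greedily clubbing 2-input gates into
    n-input LUTs shrinks the gate count by a factor of about (n - 1).
    This ratio is an assumption for illustration only."""
    return -(-num_gates_2in // (n - 1))  # ceiling division

def best_n(num_gates_2in):
    """Return the n with the lowest total cost, plus all the costs."""
    costs = {n: PBS_COST[n] * gate_count_after_clustering(num_gates_2in, n)
             for n in PBS_COST}
    return min(costs, key=costs.get), costs

n, costs = best_n(1000)  # with these placeholder numbers, n == 3 wins
```

Parallelism is ignored here, matching the note above that the parallelism-aware cost calculation can be deferred.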
**Additional notes**
- For the first end-to-end prototype (ie outputting CDL) we can stick with 3-input LUTs and hand-wave over the lookup-optimisation path. This is easier because HEIR already outputs 3-input lookup tables.
--------