Final-year project (FYP) on whether LLMs actually think or merely produce language, i.e., whether there is any alignment between their "thoughts" and their language.
This project is based on, and more precisely tries to build upon, the following research paper:
https://www.sciencedirect.com/science/article/abs/pii/S1364661324000275
A reasonably accessible Hugging Face repository gives a complete view of the project, alongside the material uploaded here:
https://huggingface.co/ritish369/fined_tuned_sparql_model
However, the finalised version in its entirety is here on GitHub.