
Group 7: Energy Efficiency of Quantized vs Full-Precision LLM Inference#169

Open
AmyTUD wants to merge 1 commit intoluiscruz:mainfrom
AmyTUD:main

Conversation

AmyTUD commented Feb 12, 2026

Use only this template to open a pull request for project 1: Measuring Energy Consumption.
Use the title below for your pull request title.

Group 7: Energy Efficiency of Quantized vs Full-Precision LLM Inference

Make sure to fill out the information under each of the headers.
Open the pull request from a repository + branch everyone in your group has access to, and use that branch to continuously update your work throughout the project weeks.

Group number on Brightspace:

7

Group members (only names, leave out student numbers):

(You will have been assigned a group of 4 members on Brightspace)
Ceylin Ece, Georgios Markozanis, Kunal Narwani, Amy van der Meijden

Agreed communication channel within the group:

WhatsApp

Did you manage to contact all group members?:

Yes

Your topic idea for Project 1:

(We will review the topics and comment on the PRs whether the projects are appropriate or need adjustment. Don't hesitate to come up to us after the lectures / in the labs to discuss project ideas)

Energy Efficiency of Quantized vs Full-Precision LLM Inference

Large Language Models are energy-intensive, but quantization techniques promise to reduce their computational demands. This project compares the energy consumption of running identical prompts through small LLMs (Llama 3.2, 1B and 3B) in both full-precision (fp16) and 4-bit quantized (GGUF) formats. We will measure energy consumption, throughput, and quality trade-offs to provide empirical data for sustainable AI deployment decisions.
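To illustrate the planned comparison, the sketch below derives energy-per-token and throughput from per-run measurements. All numbers are illustrative placeholders, not project results, and the `RunResult` type is our own construction; in the actual experiment the energy figures would come from a hardware/software meter (e.g. Intel RAPL or a wrapper tool), not hard-coded values.

```python
# Hypothetical sketch: compare energy efficiency of an fp16 run vs a
# 4-bit quantized (GGUF) run of the same prompt. Measurements below are
# placeholders; real values would be collected with an energy meter.

from dataclasses import dataclass


@dataclass
class RunResult:
    label: str
    energy_j: float     # total energy consumed during inference (joules)
    tokens: int         # tokens generated
    wall_time_s: float  # wall-clock duration (seconds)

    @property
    def joules_per_token(self) -> float:
        return self.energy_j / self.tokens

    @property
    def tokens_per_second(self) -> float:
        return self.tokens / self.wall_time_s


# Placeholder measurements for one prompt (illustrative only).
fp16 = RunResult("Llama-3.2-1B fp16", energy_j=420.0, tokens=256, wall_time_s=12.0)
q4 = RunResult("Llama-3.2-1B Q4 GGUF", energy_j=180.0, tokens=256, wall_time_s=7.5)

for run in (fp16, q4):
    print(f"{run.label}: {run.joules_per_token:.2f} J/token, "
          f"{run.tokens_per_second:.1f} tok/s")

# Relative energy saved per token by the quantized variant.
savings = 1 - q4.joules_per_token / fp16.joules_per_token
print(f"Energy saved per token by quantization: {savings:.0%}")
```

In the study itself, each run would be repeated across many prompts and interleaved to control for thermal drift, with output quality scored separately to quantify the trade-off.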

Filename of your Project 1 blog post in p1_measuring_software/ (contributed in this pull request):

(Fill out the yaml header fitting to your group)
g7_llm_quantization.md

Did you succeed in building the website locally and viewing your blog post?

yes

lacinoire (Collaborator) commented:

Dear Group 7,
Please use the template for the pull request for P1 :) You can find it here and edit your description: https://github.com/luiscruz/course_sustainableSE/blob/main/.github/PULL_REQUEST_TEMPLATE/p1_template.md

AmyTUD (Author) commented Feb 13, 2026

> Dear Group 7, Please use the template for the pull request for P1 :) You can find it here and edit your description: https://github.com/luiscruz/course_sustainableSE/blob/main/.github/PULL_REQUEST_TEMPLATE/p1_template.md

Is it okay like this?

lacinoire (Collaborator) commented:

yes @AmyTUD thank you! 🙏🏼

@lacinoire lacinoire changed the title Group 7 Initial commit Group 7: Energy Efficiency of Quantized vs Full-Precision LLM Inference Feb 13, 2026
