Getting work done with AI
I like to have a go at the Advent of Code programming contest problems. They are usually funny and entertaining, sometimes too entertaining. But this year I was busy with other matters and could not follow much of it, so it dawned on me that I could use that as an excuse to try some of the newest LLMs that people were raving about. Copying and pasting the text of an AoC problem does not require much attention, so, armed with the LMStudio software and a couple of Qwen2.5 LLMs, I started testing.

The 8B model is fast (around 24 tokens/sec on my PC, equipped with an Nvidia RTX 4060 Ti with 16 GB of VRAM), but unfortunately its output is usually not good enough. The 32B model, on the other hand, though slow at around 2.5 tokens/sec, usually gets it right: it produces Python source code that runs perfectly and delivers the proper solution, at least for the first question of each problem, in just a few minutes. Even with that slow generation rate, th...
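The copy-and-paste loop described above can itself be scripted. Here is a minimal sketch, assuming LMStudio's default OpenAI-compatible endpoint at `http://localhost:1234/v1` (which LMStudio exposes when its local server is enabled); the model identifier is a placeholder for whatever model is actually loaded.

```python
# Sketch: send an AoC problem statement to a local LMStudio server and
# get back a candidate Python solution. The endpoint URL is LMStudio's
# documented default; the model name below is an assumption and must
# match the model loaded in LMStudio.
import json
import urllib.request


def build_request(problem_text: str,
                  model: str = "qwen2.5-coder-32b-instruct") -> dict:
    """Build a chat-completion payload asking for a Python solution."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are a programming assistant. Reply with a "
                        "complete Python program that solves the problem."},
            {"role": "user", "content": problem_text},
        ],
        "temperature": 0.2,  # low temperature: we want code, not variety
    }


def ask_local_llm(problem_text: str,
                  url: str = "http://localhost:1234/v1/chat/completions") -> str:
    """POST the pasted problem statement and return the model's reply."""
    payload = json.dumps(build_request(problem_text)).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

A low temperature is used because the goal is a single runnable program rather than creative variation; the reply can then be saved to a file and executed against the puzzle input.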