This is not a fully baked collection of thoughts that I spent a ton of time crafting. It is just a simple list of why I, as a software product professional, dislike the current state of AI (Artificial Intelligence) in the tech industry:
1) It burns too many resources for the value it offers - We are burning a massive amount of electrical power to run all the AI training and applications, and we are doing so at a time when we need to be improving energy management across the world. If AI could improve its energy efficiency, I'd be more open to it.
2) Technology companies are inappropriately using it as product branding - Tech products matter because they solve problems. The actual tech used to solve the problem is not the selling point of good solutions. No consumer truly cares if the SaaS app they love is coded in React or Angular. Yet tech companies are promoting new features of their products by waving the flag of "AI-Driven!". This is silly. Sell products because of what they do, not because of what tech you use to make them do it. If you have to call out "AI" to make it sell, you didn't really create a compelling feature.
3) We simply don't need it - I've always been a bit of an oddball in the tech world because I question whether it is improving our world. I think time has proven my skeptical instincts correct as we see the negative impact of social media and misinformation campaigns across our society. I believe we need to ask whether technology improves our lives when we develop new products, and I don't see many things coming out of the current AI excitement that meet that criterion of being an overall improvement. Sure, they are clever, and maybe even helpful. But are they better? Are the results better, or just faster? Are the people working in jobs that are moving to AI better for it? Are the people who receive the work product created by AI better for it? Or are they just clever, shiny new toys? Do we really need such things, especially given my first point about the energy cost to build them?
I am not saying AI should not exist - I know that it can improve the world when applied correctly. There are use cases where those questions can be answered with an emphatic "Yes, this is better in all ways." My favorite positive example was a hearing aid I heard about that lets you focus on one person in a crowd and filter out all noise other than their voice. Yeah, sign me up for that one. And find more products like that, which solve a real problem.
But most companies putting out AI-driven products do not fall into that category. I'd encourage everyone in this industry to put much more thought into their usage of AI than what I'm seeing today.
Six months ago, I was not sold on AI. Most of my rationale still holds true, but at the same time AI is advancing to the point where being a Luddite about it is not a realistic stance for a tech professional.
So, to balance the professional need to understand how to use LLMs in software development against my reluctance to send either my data or my dollars to one of the AI platforms, I have spent the last week or so running different models through Ollama and integrating them into VS Code using continue.dev.
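For anyone wanting to poke at the same setup, Ollama serves a local HTTP API (port 11434 by default) that you can call from a few lines of Python. This is just a minimal sketch, assuming you have already pulled a model; the model name below is only an example:

```python
# Minimal sketch: ask a locally hosted Ollama model a question.
# Assumes Ollama is running on its default port (11434) and that the
# model named below has already been pulled (e.g. `ollama pull llama3`).
import requests

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"

def ask(model: str, prompt: str) -> str:
    """Send a single user message to a local model and return its reply."""
    response = requests.post(
        OLLAMA_CHAT_URL,
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,  # ask for one complete JSON response instead of a stream
        },
        timeout=300,  # local models can take a while, especially larger ones
    )
    response.raise_for_status()
    return response.json()["message"]["content"]

if __name__ == "__main__":
    print(ask("llama3", "Explain Python generators in two sentences."))
```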
My first lesson was... upgrade my memory. I was running with 32GB of RAM, which was enough for smaller models, but the machine crashed trying to run a 40B model. So I upgraded, and it now works.
FWIW, I'm running an Nvidia 3070 card, which is decent but a couple of years old - it can handle running 40B models, but the responses are slow. So I am learning which models deliver what level of quality at what speed, and am adjusting my work accordingly.
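To get a feel for that speed/quality trade-off, a quick-and-dirty timing loop is enough. The sketch below assumes the same local Ollama server, and the model tags are just examples of a smaller and a larger model you might have pulled:

```python
# Rough sketch: compare how long different local models take on the same prompt.
# The model tags are examples only; substitute whatever you have pulled locally.
import time
import requests

MODELS = ["llama3:8b", "qwen2.5-coder:32b"]  # example small and large models
PROMPT = "Write a Python function that reverses a singly linked list."

for model in MODELS:
    start = time.monotonic()
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": PROMPT, "stream": False},
        timeout=600,  # big models on an older GPU can be very slow
    )
    resp.raise_for_status()
    elapsed = time.monotonic() - start
    answer = resp.json()["response"]
    print(f"{model}: {elapsed:.1f}s for {len(answer)} characters")
```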
My findings so far have led me to use only the chat UX.
What makes it useful for me is having several models configured, so I can tailor the responses I get to what I need. continue.dev has an easy drop-down in the chat UX, so I can target a specific model for each thing I ask and copy/paste the resulting code when appropriate.
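For reference, the models show up in that drop-down once they are listed in continue.dev's config file. The exact format depends on the version (older releases use a config.json under ~/.continue, newer ones a YAML config), so treat this as a rough sketch of the shape rather than exact syntax; the titles and model tags are just examples:

```json
{
  "models": [
    {
      "title": "Llama 3 8B (fast, general questions)",
      "provider": "ollama",
      "model": "llama3:8b"
    },
    {
      "title": "Qwen 2.5 Coder 32B (slower, better at code)",
      "provider": "ollama",
      "model": "qwen2.5-coder:32b"
    }
  ]
}
```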
I would like to encourage people to find the middle ground between "AI can do everything - anyone not using it to help with all their tasks is doomed", and "AI is all hype, just keep doing what you always have."
The truth is that most things in life have a healthy middle ground that works better than living at either extreme of opinion. AI is no different. It cannot do everything. It can be a good partner for brainstorming ideas or some basic research. It has accuracy problems and is sometimes so far off-track that what it says is useless, so you cannot rely on it. Yet when you are still hashing through ideas in your mind, it provides a decent sounding board and might help you refine your thoughts.
I don't know if there is a "correct" level of use for AI, but I do know that if you either always use it or never use it, you are not doing it right.