Can AI help me create software without writing a single line of code?
I’ve been writing code since I can remember. I started with BASIC and Assembler on a Commodore C64, saw Turbo Pascal’s rise and fall, and created Windows 3.11 apps in ANSI C while at HTBLuVA St. Pölten, an Austrian engineering school. Professionally, I’ve coded in C++, Java, and Lotus/IBM/HCL Domino (using Java, LotusScript, and JavaScript), and for years I have focused on Node.js and JavaScript, along with extensive use of HTML and CSS. I’m not the type who memorizes every command; I prefer inline help, type-ahead, and IntelliSense.
A few years ago, I shifted to managing cybersecurity and data protection at panagenda, but I still code occasionally for fun or work support. With AI’s growth across industries, I often hear claims that “AI will end software developer jobs.” It makes me wonder if developers are becoming obsolete.
TL;DR (the short version)
If you’re not interested in reading the whole blog post, this is what I took away from trying out what you can develop in a couple of hours using only AI:
- AI can generate code snippets, prove concepts, and offer solutions, significantly enhancing the software development process.
- Comprehensive product development still requires human effort in brainstorming, planning, developing, testing, and supporting.
- AI will not replace software developers but will change the focus to asking the right questions and phrasing the right prompts.
- Developers can focus on creative ideas instead of repetitive coding tasks, leading to more imaginative and impactful projects.
- The future of software development looks bright with AI, making app development faster and more efficient.
The Idea
I am still a gamer, a passion that ignited my interest in computers in the early 1980s. I still assemble my own PCs and love to tinker with system information toys like AIDA64 (AIDA64) or Rainmeter (Rainmeter). Especially the visualization parts, which show things like CPU or GPU load, board temperature, or memory usage as a futuristic dashboard in the background of your desktop. And currently there is even a trend toward displays inside the computer case that show hardware information through the transparent case cover.
The idea of writing something similar, something that would also let me use my knowledge of HTML, JavaScript, and CSS, had been in my head for a long time. But I never found the time to research the basic requirements for creating such software.
Now, with GitHub Copilot available directly within Visual Studio Code, and some free days over the holidays, I thought I’d give it a shot. This should be my project to check if developers are already obsolete, and at the same time, create a piece of software I always wanted.
The rules
- The first rule I set up for myself: Do not touch the code. Everything should be written by AI. No exceptions. If something cannot be solved, this would end the experiment.
- The second rule I set was: The main code should be written in C#. As I had never coded in C# before, I thought it would be a good learning opportunity. I trust the AI code completely and hope to learn something new.
The Setup
My setup was as simple as it can be: Visual Studio Code as IDE, GitHub Copilot (v0.23.2) as the AI assistant, and GPT-4o as the model of choice. I created a simple C# project with net9.0-windows as the target framework. That’s it.
Let’s start “imagining”, part 1: C#
I have some experience creating AI prompts, so my starting prompt was quite long. I wanted a
“…software solution that displays a borderless full screen window as a background wallpaper similar to applications like AIDA64, Wallpaper Engine or Lively Wallpaper. The window content should be a full screen web browser showing the content of a html file…”.
Honestly, I didn’t expect much, but I was proven wrong. The assistant nailed almost everything I asked for and explained how it generated the code and why it picked WebView2 as the embedded browser engine. It even gave me some “dotnet add…” commands to import important libraries. Excited to see the results, I ran the dotnet commands and started the project. It launched perfectly, but instead of my window being a wallpaper, it was just a regular foreground window. It was borderless and full screen, but not exactly a wallpaper. Still, I was impressed.
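To give you an idea of what the generated starting point looked like, here is a minimal sketch of that kind of WebView2 window. This is my reconstruction, not the assistant’s exact output; it assumes a WinForms project and the Microsoft.Web.WebView2 NuGet package, and the file name dashboard.html is just a placeholder:

```csharp
// Minimal borderless full-screen window hosting WebView2 (sketch).
// Assumes: dotnet add package Microsoft.Web.WebView2
using System;
using System.IO;
using System.Windows.Forms;
using Microsoft.Web.WebView2.WinForms;

class WallpaperForm : Form
{
    private readonly WebView2 webView = new WebView2();

    public WallpaperForm(string htmlPath)
    {
        // Borderless, full screen, no taskbar entry.
        FormBorderStyle = FormBorderStyle.None;
        WindowState = FormWindowState.Maximized;
        ShowInTaskbar = false;

        webView.Dock = DockStyle.Fill;
        Controls.Add(webView);

        Load += async (_, _) =>
        {
            // Initialize the embedded browser, then load the local HTML file.
            await webView.EnsureCoreWebView2Async();
            webView.CoreWebView2.Navigate(new Uri(Path.GetFullPath(htmlPath)).AbsoluteUri);
        };
    }

    [STAThread]
    static void Main() => Application.Run(new WallpaperForm("dashboard.html"));
}
```

As the next section shows, a window like this is full screen but still an ordinary foreground window, which is exactly the problem I ran into.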
Fixing things
I tend to write my AI prompts with “emotions”. It’s probably a tic. I know it is not necessary, but… I still do it. So, the next steps of the conversation looked like this:
Me: “Ah, the full screen browser is not behaving like a wallpaper”
AI: “I see, let me fix this for you…”
Running the code
Me: “No, sorry, nothing changed”
AI: “I see, let me fix this…”
Running the code
Me: “Okay, now it’s starting behind open windows, but it’s still hiding my taskbar and desktop icons”
AI: “I see, let me fix this…”
This happened about 20 to 30 times, and I thought, “Gotcha!” But I wasn’t ready to quit, so I tried something else.
Me: “Okay. This does not work. I want the window to work like a wallpaper, similar to Lively Wallpaper or Wallpaper Engine. Do you find any references how they do it?”
AI: “To achieve an effect similar to Lively Wallpaper, you could implement the “ProgMan Trick” or “WorkerW Trick” …
THE WHAT?
I looked up the terms but did not find any direct references to a “trick”. Some more research, however, pointed to a rather old Stack Overflow discussion (Drawing on the desktop background as wallpaper replacement (Windows/C#) – Stack Overflow) about using ProgMan and WorkerW to find the correct handle to the wallpaper window.
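For the curious: the “WorkerW trick” as described in that Stack Overflow discussion boils down to a handful of Win32 calls. The sketch below is my reconstruction, with error handling omitted; sending the undocumented message 0x052C to the Progman window makes Windows spawn a WorkerW window behind the desktop icons, and re-parenting your form to it turns the form into a live wallpaper:

```csharp
// WorkerW trick sketch (Windows-only, P/Invoke into user32.dll).
using System;
using System.Runtime.InteropServices;

static class WallpaperHelper
{
    [DllImport("user32.dll")] static extern IntPtr FindWindow(string cls, string win);
    [DllImport("user32.dll")] static extern IntPtr SendMessageTimeout(IntPtr hWnd,
        uint msg, IntPtr wParam, IntPtr lParam, uint flags, uint timeout, out IntPtr result);
    [DllImport("user32.dll")] static extern bool EnumWindows(EnumWindowsProc cb, IntPtr lParam);
    [DllImport("user32.dll")] static extern IntPtr FindWindowEx(IntPtr parent,
        IntPtr after, string cls, string win);
    [DllImport("user32.dll")] static extern IntPtr SetParent(IntPtr child, IntPtr newParent);

    delegate bool EnumWindowsProc(IntPtr hWnd, IntPtr lParam);

    public static void AttachToDesktop(IntPtr formHandle)
    {
        // Ask Progman to spawn the WorkerW window (undocumented message 0x052C).
        IntPtr progman = FindWindow("Progman", null);
        SendMessageTimeout(progman, 0x052C, IntPtr.Zero, IntPtr.Zero, 0, 1000, out _);

        // Find the WorkerW sitting directly behind the desktop icons: it is the
        // top-level sibling that comes after the window hosting SHELLDLL_DefView.
        IntPtr workerW = IntPtr.Zero;
        EnumWindows((top, _) =>
        {
            if (FindWindowEx(top, IntPtr.Zero, "SHELLDLL_DefView", null) != IntPtr.Zero)
                workerW = FindWindowEx(IntPtr.Zero, top, "WorkerW", null);
            return true;
        }, IntPtr.Zero);

        // Re-parent our window so it renders behind the icons, above the wallpaper.
        SetParent(formHandle, workerW);
    }
}
```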
Okay, running the code and…TADAAAAAA…
I had a borderless, full screen window acting as wallpaper on my desktop and showing an HTML file. Now I was really impressed. But not for the last time…
Adding things
Since the original idea was to show system information on my virtual wallpaper, I needed a way to
- Access local hardware and software system information.
- Display the collected system information.
So, my next prompt looked like this:
Me: “You did well! Let’s add some way to read system information and performance from hardware and forward it to the html file. The html file should act like a template. I want an update frequency of 1 second.”
AI: “Thank you! Sure, I can add system information to your HTML file…”
The assistant hooked up “OpenHardwareMonitor” (Open Hardware Monitor) to my C# code, made the functions I needed, and tweaked the HTML to show CPU and GPU usage. All I had to do was run a simple “dotnet add” command to get the library installed.
After I asked for help with display issues like flickering, the assistant added double buffering to the window and tweaked the update code. This even led to a simple template engine! In just a couple of hours, I pretty much had what I wanted.
So, I began to look for some challenges.
Me: “Ah, instead of having the metrics inserted into the html directly, can we use vuetify as template engine”
AI: “Sure, let’s change the code so it uses Vuetify as a template engine…”
It worked right away. It nailed the functions and added the metrics perfectly. Awesome!
After a few more prompts, I got a table with all the available metrics on my screen. But I felt like not all the metrics I expected were there. I looked up “OpenHardwareMonitor” and found out it hadn’t been updated in over two years. Another challenge popped up!
Me: “It seems like OpenHardwareMonitorLib is rather outdated and no longer maintained. Is there any similar library or fork?”
AI: “You are right. OpenHardwareMonitor seems to be no longer maintained. There is a fork called “LibreHardwareMonitor” that is well maintained…”
The assistant altered the code, provided the “dotnet add…” command, and the next run showed all system metrics I hoped for.
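Reading metrics with LibreHardwareMonitor looks roughly like this. This is a sketch based on the library’s public API, not the assistant’s exact code; it assumes the LibreHardwareMonitorLib NuGet package (dotnet add package LibreHardwareMonitorLib) and requires elevated privileges for some sensors:

```csharp
// Enumerate and print all available hardware sensors (sketch).
using System;
using LibreHardwareMonitor.Hardware;

class MetricsReader
{
    public static void DumpSensors()
    {
        var computer = new Computer
        {
            IsCpuEnabled = true,
            IsGpuEnabled = true,
            IsMemoryEnabled = true
        };
        computer.Open();

        foreach (IHardware hardware in computer.Hardware)
        {
            hardware.Update(); // refresh this device's sensor values
            foreach (ISensor sensor in hardware.Sensors)
                Console.WriteLine($"{hardware.Name} / {sensor.Name} " +
                                  $"({sensor.SensorType}): {sensor.Value}");
        }

        computer.Close();
    }
}
```

In the project, a loop like this ran once per update interval and pushed the values into the HTML template.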
Further additions went flawlessly:
- “I want a tray icon with a menu structure ‘Quit’, ‘About’ and ‘Config’”
- “I want CLI parameters for the template file, the x and y position, and the width and height of the window”
- “I want a config dialog to set the x and y position of the window and to define width and height. I also want a slider to adjust the update speed from 250ms to 2500ms in 250ms steps”
- “I want a menu with items to load and save the configuration into a JSON file within the config dialog”
I also wanted some “design” tweaks, like padding around the config window border and moving the button positions, and the assistant nailed every request without a hitch.
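The JSON load/save part is the kind of thing modern .NET makes almost trivial. Here is a minimal sketch of what such configuration handling can look like; the property names are illustrative, not the ones the assistant actually generated:

```csharp
// Configuration persistence via the built-in System.Text.Json serializer (sketch).
using System.IO;
using System.Text.Json;

public class WallpaperConfig
{
    public int X { get; set; }
    public int Y { get; set; }
    public int Width { get; set; } = 1920;
    public int Height { get; set; } = 1080;
    public int UpdateIntervalMs { get; set; } = 1000; // slider range: 250–2500 ms

    public void Save(string path) =>
        File.WriteAllText(path, JsonSerializer.Serialize(this,
            new JsonSerializerOptions { WriteIndented = true }));

    public static WallpaperConfig Load(string path) =>
        JsonSerializer.Deserialize<WallpaperConfig>(File.ReadAllText(path))!;
}
```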
After roughly 6 hours of chatting with AI, I ended up with a neat little product that, with just a bit of polishing and some long-term testing, has almost the same features as those commercial ones you find in the Microsoft Store. By now, the code’s gotten so complex that it’s a nightmare to manage. All in one file with way too many lines. No one but the AI itself can really handle it.
Splitting things
So, my next challenge to the AI was:
Me: “Can you split up the code into different files? I want separate files for all classes, like configuration handling, configuration dialog, the tray icon and the about dialog.”
AI: “Sure, let me split your code into files for you”
This one took a while, probably the longest response time in the whole experiment. When the assistant finally finished, I got a bunch of .cs files, all nicely named like I asked. But each file had several syntax errors, and the project wouldn’t start anymore.
I used the “Copilot – Fix” option for each error, which sometimes fixed one problem but created a few new ones (mostly undefined variables or duplicate definitions). After a couple of rounds of fixing (always starting with the first error in the file seems to help), I had a working project again – this time in a format that’s easier for humans to handle. Starting the project worked fine again, so even though splitting files might not work perfectly right away, you can get things sorted out with a bit of help from your AI without needing to code yourself.
Let’s start “imagining”, part 2: HTML, CSS, JavaScript… and a Debugger.
I was super excited about how far we got in such a short time. So, I was thrilled to finally be doing what I wanted – making a cool web dashboard that I could use as my notebook wallpaper. I had just gotten this formidable new notebook from panagenda with a powerful Intel i9 CPU and an Nvidia RTX GPU, so I wanted to create something fresh and unique. My first move was to add some CSS for background colors and fonts. But… nothing happened. No color changes. Fonts didn’t load. Zilch. And I had absolutely no clue how to debug a browser window running as a wallpaper.
Me: “I do not see anything I define in a CSS file that is loaded by my html template. Can you check if everything is correct?”
AI: “Let me check the HTML file…”
So, it basically said everything looked fine, but that I could use the WebView2 inspector to double-check whether everything was loaded properly. It even gave me the keyboard shortcut to open the inspector (Ctrl+Shift+I).
I checked the inspector and saw an error message: “Unable to load local resource…”. Out of habit, I copied it into my browser search and found out that WebView2 can’t use the file:// protocol for security reasons. “How annoying”, I thought. But hey, this is an AI project – so I pasted the same error message right into my AI chat and waited to see what would happen.
What the assistant did next was another key moment that made me gasp. It said something like: “This error occurs because, for security reasons, WebView2 cannot load local resources with the file:// protocol. However, you can implement a basic HTTP server to serve local files to your HTML file…”. The answer was pretty long, diving into configuration options and different file types. But in the end, I had a small local HTTP server running on port 8080 in my project. It served files from a specific directory, and the HTML code loaded my CSS file correctly. (I deliberately ignored the security implications of running an unprotected local HTTP service instead of just loading a local file. That wasn’t part of the experiment.)
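For illustration, a tiny local file server of that kind can be sketched with the built-in HttpListener. This is my reconstruction, not the generated code; the port matches the experiment, the MIME-type handling is simplified, and – like the original – it has no security hardening at all:

```csharp
// Minimal local static-file server on http://localhost:8080/ (sketch, insecure).
using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;

class LocalFileServer
{
    public static async Task ServeAsync(string webRoot, int port = 8080)
    {
        var listener = new HttpListener();
        listener.Prefixes.Add($"http://localhost:{port}/");
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = await listener.GetContextAsync();
            string relative = ctx.Request.Url.AbsolutePath.TrimStart('/');
            string path = Path.Combine(webRoot, relative == "" ? "index.html" : relative);

            if (File.Exists(path))
            {
                byte[] body = File.ReadAllBytes(path);
                // Crude content-type guess; a real server would use a MIME table.
                ctx.Response.ContentType = path.EndsWith(".css") ? "text/css"
                    : path.EndsWith(".js") ? "text/javascript" : "text/html";
                await ctx.Response.OutputStream.WriteAsync(body, 0, body.Length);
            }
            else
            {
                ctx.Response.StatusCode = 404;
            }
            ctx.Response.Close();
        }
    }
}
```

With this in place, the template simply references http://localhost:8080/style.css instead of a file:// URL.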
That impressive result got me thinking:
Me: “Thank you! The WebView2 inspector is very helpful. Is there a way to enable and disable it through our tray icon menu?”
AI: “Sure, let me add a tray icon menu to toggle the webview2 inspector window…”
And sure enough, it worked right away.
Let’s push the boundaries
I was super pumped about the results, so I thought I’d push the boundaries. Can GitHub Copilot turn a CodePen example of some awesome visualization into usable code for my app? I stumbled upon this really cool lightning simulation by Akimitsu Hamamuro (Akimitsu Hamamuro on CodePen) a while back. It’s written in JavaScript, and I decided to give it a go with the following prompt:
Me: “I found this cool lightning on codepen (https://codepen.io/akm2/pen/DbNJXr) and want to use it in my html template. Can you create all necessary files in my www root directory and implement a single lightning in my template. I want to apply the template to a defined canvas and want to control start x/y and end x/y and all properties available in the codepen”
AI: “Sure, I can help you with creating the necessary files to use the cool lightning from the CodePen in your template…”
It didn’t go perfectly on the first try. The files were created, but there were some errors and script loading order issues. After a few rounds of “Copilot -> Fix” and pasting some console error messages from the WebView2 Inspector, it finally worked! I had fully controllable lightning right in my browser window, running in the background as animated wallpaper. Isn’t that awesome?
Alright, taking it a step further:
Me: “Can you create a lightning for each cpu core in the metrics. Use the utilization to alter amplitude and the number of child-lightnings for each lightning. I want the lightnings arranged in a circle, inner radius 100px and outer radius 300px, at the center of the screen”
The result really blew me away. Again.
Check out the short video below—GitHub Copilot nailed it! It even did the range calculations perfectly to make sure the amplitude and the number of child-lightnings were spot on, giving cool visualizations as the CPU usage went from 0% to 100%.
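The “range calculations” it got right are essentially linear mappings from a core’s utilization onto the lightning parameters. A sketch of the idea, with illustrative output ranges rather than the exact values from the generated code:

```csharp
// Map CPU utilization (0–100 %) onto lightning parameters (sketch).
using System;

static class LightningMapping
{
    // Linearly map value from [inMin, inMax] to [outMin, outMax], clamped.
    public static double Map(double value, double inMin, double inMax,
                             double outMin, double outMax)
    {
        double t = (value - inMin) / (inMax - inMin);
        t = Math.Max(0, Math.Min(1, t)); // clamp to the input range
        return outMin + t * (outMax - outMin);
    }

    // Higher load -> wider, wilder lightning (example ranges).
    public static double Amplitude(double cpuPercent) =>
        Map(cpuPercent, 0, 100, 10, 80); // pixels

    // Higher load -> more child lightnings (example range).
    public static int ChildCount(double cpuPercent) =>
        (int)Math.Round(Map(cpuPercent, 0, 100, 0, 5));
}
```

Each of the 16 lightnings was then placed on the circle (inner radius 100 px, outer radius 300 px) at its core’s angle and driven by these two values every update tick.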

I threw in some extra info like time and date to show up on the left and right sides of the screen, just for kicks. Just to watch the AI assistant do its magic again. Honestly, it was tough to stop adding new stuff at this point.
Conclusion
I stared at the result for a while… 16 flashes showing my laptop’s CPU load. It was a full-on C# project that set a window as my desktop background image, included a web server to manage files for a Vuetify template, and a system info collector feeding data to the browser template. Configuration options, debug options—everything. And I didn’t write a single line of code. All I did was tweak some color values for the HTML background and the final lightning colors.
The AI hype finally got me hooked.
I moved away from software development to something new and different (still close enough to stay connected to something I’ve loved all my life) because programming just lost its charm. It wasn’t fun anymore. Things got too complex, technologies moved too fast, and there was never enough time to really dive into one thing before the next big thing came along.
But this? This felt totally different. It felt as good and familiar as coding back in the day, like 30 or 40 years ago. It brought the creative process front and center, pushing all the “what was that parameter?” and “what’s the right order for this?” stuff to the background.
When I showed a sneak peek of this blog article to my colleagues, their first reaction was: “Wow. That’s… really cool!”. At the same time, some were worried that we might give the impression that this is how we develop our solutions at panagenda.
That made me chuckle. Of course, we’re not. Asking an AI to cobble together some code snippets is far from actual software engineering. Creating top-notch products takes so much more effort, from brainstorming and planning to developing, testing, release planning, and finally, running and supporting them. Every step in the product development life cycle needs proper documentation and attention to all kinds of security and business issues.
But having a smart assistant at your side that can help whip up instant proofs of concept, suggest fixes for errors, and even bring its own solutions for problems you haven’t thought of… now that is truly transformative, at least for the experimental and creative side of software development.
So, is AI going to take away software developer jobs?
Well, with what I saw, I would clearly say, NO!
I think things are going to change big time. This could totally take “coding” to a whole new level and make app development way faster. In the future, no one will care about “10 years of C# experience” or “5 years of coding with React.” Programming languages and frameworks will still matter, but mostly for those building them or making libraries.
The main skill for app developers will be knowing how to ask the right questions and “phrase the right prompt” to get the best results from an AI. You’ve got to be able to read and check code and give clear instructions, so your AI-generated code meets all the technical requirements.
It’s really awesome to see developers let their creativity run wild again. They can focus on cool ideas instead of boring, repetitive coding tasks. This freedom means we’ll see more imaginative and impactful projects. It’s going to be so interesting to watch how this tech changes the software development scene. I am absolutely thrilled to witness this transformation unfold! The possibilities are endless, and the future of software development has never looked brighter.