Despite spending a bunch of my free time learning about and playing with various aspects of AI, the most interesting application lately has been in my day job.
I was trying to create a Kubernetes pod that I could use in a Jenkins pipeline to build Docker images and push them to a repository. The central problem I was trying to solve was having a pod that contained 2 containers: a "Docker-in-Docker" (`dind`) container that ran the Docker engine, and a `builder` container with a build script whose docker commands would be executed by the engine in the `dind` container.
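As a sketch of the shape I was after (the container names are from my setup, but the image tags and command here are illustrative, not my actual config), the pod definition looked something like this. Note that this naive version is exactly the one that ends up broken on recent dind images, for reasons that become clear later:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: docker-build
spec:
  containers:
    - name: dind                    # runs the Docker engine
      image: docker:24.0.6-dind
      securityContext:
        privileged: true            # dind requires privileged mode
    - name: builder                 # runs the build script with the docker CLI
      image: docker:24.0.6          # any image with a docker client would do
      command: ["sleep", "infinity"]
      env:
        - name: DOCKER_HOST         # containers in a pod share localhost
          value: tcp://localhost:2375
```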
I started by copying a Jenkinsfile from another project in the same organization that had been doing that exact thing for years. Trouble was, the Docker container from years ago couldn't run the commands that my current script needed. Specifically, running `pip install -r requirements.txt` kept failing with an error saying it could not start a new thread. I upgraded the image to `docker:24.0.6-dind`, and then my `builder` container could not connect to my `dind` container anymore.
Reading and Searching
The first thing I turned to was the Docker Hub page for docker (which is not one of the top 10 results if you search for "docker" on Docker Hub!). The page talks about why you probably don't want to do that. Then it suggests you read a particular blog post that once told you that you didn't want to do it, and now says it is fine, but doesn't tell you how.
After the link to the blog post, the Docker Hub page says that if you are sure that you still want to run Docker in Docker, despite their warnings, here's how... It then gives you `docker run` commands for the containers, but they are old, using the `-v` flag to mount volumes, and translating them to a Kubernetes container definition still left my builder unable to see my Docker daemon.
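For what it's worth, the translation itself is mechanical: a `-v name:/path` flag on `docker run` corresponds to a named volume plus a `volumeMount` in the pod spec. A hypothetical example (names are mine, not from the hub page):

```yaml
# docker run -v docker-certs-client:/certs/client ...
# becomes, in a pod spec:
spec:
  containers:
    - name: builder
      image: docker:24.0.6
      volumeMounts:
        - name: docker-certs-client
          mountPath: /certs/client
  volumes:
    - name: docker-certs-client
      emptyDir: {}    # or a PVC; an emptyDir is shared by all containers in the pod
```

The mechanics being easy is exactly why the failure was confusing: the volumes mounted fine, but the builder still couldn't talk to the daemon.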
Google searches turned up more blog posts and GitHub repos. All very out of date. I found a very detailed description of how to set up a Jenkins job to build a Docker container in Kubernetes for Docker version 18. I found descriptions of a docker-out-of-docker approach that was no longer considered necessary by version 18.
The things I was finding were probably really good resources at the time they were written. They were certainly very clear. But they are obsolete now. If there are newer posts explaining how to do the same thing in docker version 24, they don't have the search rank to be found.
Chat enters the chat
Finally, I turned to ChatGPT. I decided to start from the `docker run` commands I found on the Docker Hub page, figuring I had made a mistake in translating the volume specifications to my Kubernetes pod definition. It provided me with definitions of persistent volume claims, and a pod spec that mounted the volumes. It also cautioned that Docker-in-Docker was "unconventional."

I asked if, instead of PVCs, I could mount the volumes with a host path. Its first and last paragraphs explained why that was not recommended for most cases, but in between it provided a pod specification with the volumes pointing to paths on the host.
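The hostPath variant it produced looked roughly like this (the path here is a hypothetical placeholder, not what it gave me). As its warnings said, hostPath ties the pod to a particular node's filesystem, which is why it's discouraged for most cases:

```yaml
spec:
  containers:
    - name: dind
      image: docker:24.0.6-dind
      securityContext:
        privileged: true
      volumeMounts:
        - name: docker-storage
          mountPath: /var/lib/docker
  volumes:
    - name: docker-storage
      hostPath:
        path: /data/docker-storage   # hypothetical path on the node
        type: DirectoryOrCreate
```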
I then gave it the spec of the other container I was trying to run in the same pod, and the error I was getting. It said that, to resolve this, I would need to ensure that:

1. The Docker-in-Docker container is running with the exposed Docker daemon API endpoint.
2. The client container is correctly configured to communicate with this endpoint.

And then it gave me 4 long, detailed steps to follow to troubleshoot. I followed up with another error. It responded with 6 "things to consider" and then "Here's a detailed plan," which had 3 steps. An updated error message caused it to list new implications and give a new plan.
At its suggestion, I tried shelling into the `builder` pod and running `docker info`. And finally, the problem was clear:
ERROR: Error response from daemon: Client sent an HTTP request to an HTTPS server.
At which point, ChatGPT gave me the environment settings I needed to enable TLS on the connection between my `builder` container and the Docker daemon.
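For anyone landing here with the same error, the usual shape of the fix for recent `docker:dind` images (this is my reconstruction of the standard approach, not ChatGPT's verbatim output) is to point the client at the TLS port and share the certificates the daemon generates at startup:

```yaml
spec:
  containers:
    - name: dind
      image: docker:24.0.6-dind
      securityContext:
        privileged: true
      env:
        - name: DOCKER_TLS_CERTDIR    # dind generates CA/server/client certs here
          value: /certs
      volumeMounts:
        - name: docker-certs
          mountPath: /certs
    - name: builder
      image: docker:24.0.6
      command: ["sleep", "infinity"]
      env:
        - name: DOCKER_HOST           # 2376 is the TLS port; 2375 is plain HTTP
          value: tcp://localhost:2376
        - name: DOCKER_TLS_VERIFY
          value: "1"
        - name: DOCKER_CERT_PATH      # client certs written by the dind container
          value: /certs/client
      volumeMounts:
        - name: docker-certs
          mountPath: /certs
  volumes:
    - name: docker-certs
      emptyDir: {}                    # shared between the two containers
```

The "Client sent an HTTP request to an HTTPS server" error is what you get when the client still talks plain HTTP to port 2376, or to a daemon that has TLS enabled by default.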
Was that exhausting to read? This is the very distilled essence of about 3 days of work. ChatGPT doesn't show timestamps when you look at past conversations, but I think the troubleshooting discussion I just described took under 2 hours.

Would it have been better to go to ChatGPT sooner? I actually had gone to ChatGPT earlier in the process, just after I read the first cautionary blog post, and it merely rehashed the content from the post.
Once I had the specific code I was trying to run, which included the specific version of Docker I was trying to use, I was able to get to a solution pretty quickly. Even then, though, I had to understand my situation and its answers well enough to get it to make the adjustments I needed.
When I encounter something I can't solve on my own, I look for documentation, and then I turn to Google searches, often for specific error messages. ChatGPT is really powerful, but I find I have to have enough understanding to ask it the right questions.
But sometimes it makes sense to go to ChatGPT first. On Friday, I wanted to modify a bash script so that it could handle one specific error without failing, but fail the job on all other errors. I couldn't be bothered to look up the nuances of bash syntax. ChatGPT presented several options, and after a few back-and-forth exchanges, it wrote my script for me.
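My script is work code, but the pattern we converged on can be sketched like this: run the command, capture its output, and only swallow the one error you expect. The function name and the matched message below are my own placeholders, not the real script:

```shell
#!/bin/sh
# Run a command; succeed if it exits 0 or if its output matches a known
# benign error pattern; otherwise fail with the command's own exit code.
run_tolerating() {
  pattern=$1
  shift
  out=$("$@" 2>&1) && { printf '%s\n' "$out"; return 0; }
  status=$?                      # exit code of the failed command
  if printf '%s\n' "$out" | grep -q "$pattern"; then
    printf 'ignoring known error: %s\n' "$out"
    return 0
  fi
  printf '%s\n' "$out" >&2       # unexpected error: surface it and fail
  return "$status"
}
```

A job would then call something like `run_tolerating "tag already exists" docker push "$IMAGE"`, and only that one error would be tolerated.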
ChatGPT is another useful tool that can make a developer much more productive. But the more you understand your problem, the more you will get out of its solutions.