I am currently implementing an imaging pipeline using OpenCV and TypeScript.
I have never used OpenCV before and have little imaging experience either. What I do have, though, is a PhD in astrophysics/statistics, so I can follow the details easily.
The results are amazing. In two days of work I am getting results that would previously have taken me weeks.
ChatGPT acts like a research partner. I give it images, and it explains why the current scoring functions fail and throws out new directions to try (a toy example of the kind of scoring code involved is sketched below).
Yes, sometimes my ideas are better; sometimes ChatGPT has the better idea. It is more or less like a human colleague.
And if I want to try something, the code it produces is usually bug-free. It is so fast to just write the code, try it, and throw it away if I want to try another idea.
I think (a) OpenCV probably has more training data than circuits, and (b) I do not treat it like a desperate student with no knowledge.
I expect to have to guide it.
There are several hundred messages back and forth.
It is more like two researchers working together with different skill sets complementing one another.
One of those skill sets being the ability to turn a 20-message conversation into bug-free OpenCV code in 20 seconds.
No, it does not provide a perfect solution to every problem on the first iteration. But it IS allowing me to both learn very quickly and build very quickly. Good enough for me.
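To make the discussion concrete, here is a minimal sketch of the kind of scoring helper being described, assuming the WASM build of OpenCV for JavaScript (for example via the @techstark/opencv-js package) and a simple Laplacian-variance sharpness metric; the package, the function name, and the metric are illustrative assumptions, not the author's actual pipeline.

```typescript
// Illustrative sketch only: a crude "sharpness" score based on the variance
// of the Laplacian response. Assumes opencv.js (WASM) via @techstark/opencv-js
// and that the WASM runtime has finished initializing before this is called.
import cv from "@techstark/opencv-js";

type Mat = InstanceType<typeof cv.Mat>;

// src: an RGBA Mat, e.g. obtained from cv.imread(canvasOrImgElement) in the browser.
export function sharpnessScore(src: Mat): number {
  const gray = new cv.Mat();
  const lap = new cv.Mat();
  const mean = new cv.Mat();
  const stdDev = new cv.Mat();
  try {
    cv.cvtColor(src, gray, cv.COLOR_RGBA2GRAY, 0);
    cv.Laplacian(gray, lap, cv.CV_64F, 1, 1, 0, cv.BORDER_DEFAULT);
    cv.meanStdDev(lap, mean, stdDev);
    const sigma = stdDev.doubleAt(0, 0);
    return sigma * sigma; // higher Laplacian variance ~ sharper, more detailed image
  } finally {
    // opencv.js Mats live in WASM memory and must be freed explicitly.
    gray.delete();
    lap.delete();
    mean.delete();
    stdDev.delete();
  }
}
```

A scoring function of roughly this size is the kind of building block that can be generated, tried on a few test images, and thrown away or refined within minutes, which is the workflow described above.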
That's a good use case, and I can easily imagine that you get good results from it because (1) it is in a domain you are already familiar with, (2) you are able to check that the results you are getting are correct, and (3) the domain you are leveraging (coding expertise) is one that ChatGPT has ample input for.
Now imagine you are using it in a domain you are not familiar with, one where you can't check the output, or one that ChatGPT has little input for.
If any of those is true, the output will look just as good, but you will be in a much more difficult position to make good use of it, and you might be tempted to use it anyway. A very large fraction of the use cases for these tools that I have come across professionally so far are of the latter variety; only a minority are of the former.
And taking all of these considerations into account:
- How sure are you that the code is bug-free?
- Do you mean that it seems to work?
- Do you mean that it compiles?
- How broad is the range of inputs that you have given it to ascertain this?
- Have you had the code reviewed by a competent programmer (assuming code review is a requirement)?
- Does it pass a set of pre-defined tests (part of requirement analysis)?
- Is the code quality such that it is maintainable in the long term?