There's a fantastic 2010 Ted Chiang story exploring just that, in which the most universally useful, stable and emotionally palatable AI constructs are those that were actually raised by human trainers living with them for a while.
https://en.wikipedia.org/wiki/The_Lifecycle_of_Software_Obje...
It might be just me, but I found this story incredibly boring and difficult to get through, so much so that I haven't gone back to finish the rest of Exhalation yet. The ideas are very interesting, like all his stories, but the plot and characters feel like bare-bones scaffolding, just there so we can call it a story instead of an essay. I think it could have worked as a short story, but as an almost full-length novel I really needed something more to feel engaged. The ending is also kind of strange: he introduces a brand-new philosophical conundrum and then just ends the story instead of exploring it.
It's such a good story, that one. It feels incredibly relevant and timely today.
Unfortunately, Ted Chiang has now started doing a lot of AI commentary, apparently believing that because he wrote a story about something called AI, he understands how the real-life things that share the name actually work.
No one can ever escape metaphor-based development in the AI field.