You should look up the r/LocalLLaMA and r/StableDiffusion subreddits to see what's possible to run locally. If you have 24 GB of VRAM, you can do A LOT!
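For a concrete sense of what a 24 GB card can handle, here's a minimal sketch of running a quantized LLM locally with llama-cpp-python. The model path, quantization level, and context size are assumptions, not specifics from this thread — swap in whatever GGUF model you download:

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
# Assumes you've downloaded a GGUF-quantized model; the path and settings below are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-model-Q4_K_M.gguf",  # hypothetical path to a 4-bit GGUF file
    n_gpu_layers=-1,  # offload all layers to the GPU; a 4-bit quantized ~30B model fits in roughly 24 GB
    n_ctx=8192,       # context window; longer contexts use more VRAM
)

out = llm("Explain what VRAM is in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```

The same idea applies on the image side: the Stable Diffusion community runs quantized and optimized pipelines that fit comfortably in that much VRAM.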