Sorry, but for every chat log of one teenager who committed suicide because of AI, I'm sure you can find many more people and teens with suicidal thoughts or intent who explicitly did NOT act on them because of advice from AI systems.
I'm pretty sure AI has saved more lives than it has taken, and there are pretty strong arguments that someone who's thinking of committing suicide will likely be thinking about it with or without AI systems.
Yes, sometimes you really do "have to take one for the team" when it comes to tragedy. Indeed, Charlie Kirk was literally talking about this at the EXACT moment he took one for the team. It is a very good thing that this website is primarily not parents, as they cannot reason with a clear, unbiased opinion. This is why we have dispassionate lawyers to try to find justice, and why non-parents should primarily be the ones making policy involving systems like this.
Also, parents today already go WAY too far with non-consensual actions taken toward children. If you circumcised your male child, you have already done something very evil that might make them consider suicide later. Such actions are so normalized in the USA that not doing it will make you seem weird.
The relatively arbitrary cutoff at 18 is also an indication that this is a blunt tool, intended to address some low-hanging fruit of potential misuse, but one that will clearly miss the larger mark, since there will be plenty of false positives (not to mention false negatives).
Some kids are mature enough from day one to never need tech overlords babysitting them, while others will need to be hand-held through adulthood. (I've been online since I was 12, during the wild and woolly Usenet and BBS days, and was always smart enough not to give personal info to strangers; I also saw plenty of pornographic images [on paper] from an even younger age and turned out just fine, thank you.)
Maybe instead of making guesses about people's ages, ChatGPT could, when it detects potentially harmful behavior, walk the user through a series of questions to ensure they know and understand the risks.