
sebmellen · today at 5:44 PM · 0 replies · view on HN

Delve did not even try to fake the reports well. They could have used AI tooling to generate somewhat plausible Assertions of Management, but instead they dropped raw form submissions straight into the reports they provided. Here is an example from Cluely:

> We have prepared the accompanying description of Cluely, Inc., system titled "Cluely is a desktop AI assistant to give you answers in real-time, when you need it." throughout the period June 27, 2025 - September 27, 2025(description), based on the criteria set forth in the Description Criteria DC Section 200 2018 Description Criteria for a Description of a Service Organization’s System in a SOC 2 Report (description criteria).

> The description is intended to provide users with information about the "Cluely is a desktop AI assistant to give you answers in real-time, when you need it." that may be useful when assessing the risks arising from interactions with Cluely, Inc. system, particularly information about the suitability of design and operating effectiveness of Cluely, Inc. controls to meet the criteria related to Security, Availability, Processing Integrity, Confidentiality and Privacy set forth in TSP Section 100, 2017 Trust Services Principles and Criteria for Security, Availability, Processing Integrity, Confidentiality and Privacy (applicable trust services criteria).

I mean, just re-read this sentence:

> The description is intended to provide users with information about the "Cluely is a desktop AI assistant to give you answers in real-time, when you need it." that may be useful

It makes no sense at all.

Someone implemented the code to automate this report mill and didn't even think to smooth the output with an LLM! There was clear intent here.
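For anyone wondering how you end up with a sentence like that: it reads exactly like blind string substitution, where a form field meant to hold a short system name instead received the company's full marketing tagline. A minimal sketch (entirely hypothetical; the template text and field names are my guesses, not the actual mill's code):

```python
# Hypothetical reconstruction of naive report-mill templating.
# The template slot expects a short noun phrase ("the Payments Platform"),
# but the form submission supplied a complete sentence, producing the
# ungrammatical output quoted above.
TEMPLATE = (
    'The description is intended to provide users with information '
    'about the "{system_name}" that may be useful when assessing the '
    'risks arising from interactions with {company} system.'
)

form_submission = {
    "company": "Cluely, Inc.",
    # Field asked for a system name; customer pasted their tagline instead:
    "system_name": (
        "Cluely is a desktop AI assistant to give you answers "
        "in real-time, when you need it."
    ),
}

print(TEMPLATE.format(**form_submission))
```

No validation, no grammar pass, not even a check that the field is a noun phrase; the value is interpolated verbatim, full stop and all.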

To imagine that an auditor reviewed and stamped this as a coherent body of work beggars belief.