Editorial standards
Robotics and Physical AI treats trust signals as part of the product. These standards describe how stories are produced, where AI assistance may be used, and what editorial obligations remain before publication.
Editorial review remains the gate
AI tools may assist with research organisation, summarisation, headline suggestions, or first-draft generation, but they do not replace editorial judgement. A story is not fit to publish until a human editor has assessed its framing, sourcing, and material claims.
Sourcing hierarchy
We prefer primary or close-to-primary material where practical, including official statements, public filings, regulator notices, research papers, product documentation, court records, and direct-source reporting. Secondary reporting can inform coverage, but it should not substitute for better source material where that material is reasonably available.
Fact, analysis, and uncertainty
Stories should distinguish established facts from interpretation, forecast, or editorial judgement. If evidence is incomplete, the copy should say so. Certainty should not be overstated to make a story read cleaner than the reporting supports.
Speed does not excuse sloppiness
This publication is designed to move quickly, but speed is not a defence for weak sourcing, invented context, or padded certainty. If a story cannot be responsibly verified at the required level, it should be held, narrowed, or framed more carefully.
Bylines and accountability
Every story should carry a named author or a clearly identified desk attribution. Author pages, policy pages, and source modules exist so readers can understand who published the work and what standards apply.
Changes, corrections, and disclosures
Material changes should be reflected clearly. If AI assistance materially shaped a workflow, that does not remove the publication's responsibility for the final output. Corrections and substantive updates should follow the public Corrections Policy.