Discussion about this post

Clayton Ramsey:

Very interesting. I especially like the idea of blocking lethal means. Obviously there are workarounds for this type of regulation but for consumer-grade LLMs it seems sensible.

Transparent artificiality also makes sense. Remembering what it is you’re dealing with is always good. That’s part of the approach I use to manage my interactions with LLM personalities.

So I encourage that, even when I indulge in imaginative or performative activities.

r_rx:

Well-constructed article. The one area I want to press on a little further is personalization. We agree that users shouldn't be exploited, but without adaptive personalization grounded in behavioral science, you can't reliably detect drift or protect users.
