4 Comments

Excellent. Thanks for the pointer to Thierer.

I suspect the issue is partly that there aren't many AIs yet, and that some on the right are concerned they are woke. A new page makes the point that they will soon be embedded in the office suites of Microsoft and Google and will nudge most of the words written on the planet. That is an understandable concern, but one that market competition can address if third parties provide AIs with different biases:

http://PreventBigBrother.com

It suggests the need for something like Section 230 to prevent liability issues from undermining the rise of AI, and argues against government control of AI as being akin to government control of speech, which we have a First Amendment to protect against. While the First Amendment protects the rights of listeners to hear speech, it's unclear whether it will protect the creation of the speech they wish to hear when it is produced by a machine rather than a human.

In the clips I saw (maybe ChatGPT did the editing) even Blumenthal and Durbin seemed skeptical of some of the regulation ideas.

""Number one, a safety review, like we use with the [Federal Drug Administration] prior to widespread deployment," said Marcus." What about efficacy standards😉?

And what could go wrong? https://technomancers.ai/eu-ai-act-to-target-us-open-source-software/
