Using AI to detect guns. Media bias is alive, AI confirms. An uncensored ChatGPT, released.
GM, and happy Thursday! 🌞 Welcome to another installment of the AIcyclopedia newsletter.
Here’s what we’ve got for ya today:
AI gun detection tech to be used in schools
High school AI project reports media bias 📣
FreedomGPT: The uncensored ChatGPT 🗽
Open letter to ⏸️ AI development: Top 5 points
Top AI news you need to know: Part ☝🏾
1/ Can AI technology save our schools from mass shootings?
Sometimes, AI crosses paths with news that isn’t always fun to talk about.
But with the recent mass shooting at The Covenant School, we feel it’s only right that we bring to light AI technology that could benefit the future safety of our children.
[The news] According to Fox News, schools are turning to AI to help detect school shooters via camera technology.
A school district in Charles County, Maryland, will be one of the first to add this AI technology to its monitoring system.
[AI used] Two AI gun detection companies are currently offering their services for this purpose: ZeroEyes and Omnilert.
Both have a similar goal: to detect guns/weapons, warn people ahead of time, and protect people who are in danger.
Jason Stoddard, Director of School Safety and Security for Charles County Public Schools, tells Fox News that “This artificial intelligence has the ability to be able to identify a weapon, to assess what’s going on and how that person is acting.”
[Why this matters] There have already been 130 mass shootings in 2023. And according to CNN, lawmakers in Congress remain reluctant to make any concrete decisions on gun control.
So if AI can help save even one life in the near future, then it’s worth trying.
2/ Virginia student uses AI to expose racial bias in media
Does media bias exist? Well, according to one 18-year-old, it does. And she used AI to help her prove it.
[The news] Emily Ocasio, an 18-year-old from Virginia, wanted answers about media bias, so she created an AI program that analyzed media coverage of Black homicide victims.
Her findings may or may not surprise you: Black victims were “less likely to be humanized in news coverage, even when compared to their White counterparts.”
But what does it mean to "humanize" a victim in news coverage?
It means presenting them "as a person, not just a statistic."
Beyond the criminal facts, the news should provide additional information about the victim (e.g., family, occupation, loved ones).
[The AI] Ocasio's program analyzed FBI records of homicides from 1976 to 1984 and how they were talked about in The Boston Globe. Her research found that:
Black men under the age of 18 were 30% less likely to receive humanizing coverage than White counterparts
Black women were 23% less likely to be humanized.
As a result of her research, Ocasio was awarded second place in the Regeneron Science Talent Search, a prestigious competition that recognizes exceptional high school students for their scientific research.
Faith in humanity: Restored ✅
Here’s Emily presenting her project:
Want to sponsor a future newsletter and share your business with our growing list of followers?
Go ahead and send an email to [daniel@aicyclopedia.com].
This space could be yours…🥳
Top AI news you need to know: Part ✌🏼
3/ FreedomGPT: The uncensored ChatGPT
What if there was a chatbot that could answer any question, without any ethical guardrails or censorship?
Age of AI, an AI venture capital firm, has recently released a new chatbot called FreedomGPT that claims to do just that.
[The AI] Built on Alpaca, an open-source AI model released by Stanford University computer scientists, FreedomGPT is not related to OpenAI.
FreedomGPT excludes safety filters programmed by humans and claims to answer any question, no matter how provocative or controversial.
[The concerns] BuzzFeed says that based on its tests, FreedomGPT's first answers were surprisingly normal and stayed true to moral principles.
But as reporters asked increasingly difficult and problematic questions, it complied without hesitation.
For example, a BuzzFeed reporter was able to get answers on how to make a bomb at home and which websites to visit to download sexual abuse videos.
At this point, you might be asking yourself: How dangerous is this?
According to their website:
AI safety cannot be achieved through censorship. Attempting to do so is analogous to censoring free speech in the name of safety. Ultimately, AI is merely a reflection of the models it was trained on. AI safety must be addressed systemically and through transparency.
John Arrow, the founder of Age of AI, supports AI guardrails in some cases, but ideologically he believes people should have access to an AI experience without any guardrails.
Looking forward… the company plans to release an open-source version, so anyone and everyone can play with the system and customize it to their needs.
4/ Top 5 points from open letter to pause AI development
Elon Musk, along with 1,000+ other high-level figures, has signed an open letter calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
Here’s the 1-minute snippet:
AI systems with human-competitive intelligence can pose profound risks to society and humanity, and require careful planning and management.
AI labs are engaged in an out-of-control race to develop and deploy ever more powerful AI systems that no one can understand, predict or control.
There is a need to develop shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.
AI developers must work with policymakers to accelerate the development of robust AI governance systems that include new regulatory authorities, oversight and tracking of highly capable AI systems, liability for AI-caused harm, and well-resourced institutions to cope with economic and political disruptions.
There is a need to pause for at least 6 months the training of AI systems more powerful than GPT-4 and refocus on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
The latest status update on the signature list:
Your AI news break 🥳
AI News Tid Bits…
👉🏻 Digital service company Compass UOL used generative AI to complete a software project in half the time its software development team would have taken. Even so, the CEO says that despite AI’s speed, “software developers are needed more than ever.”
👉🏻 The Google Assistant team is restructuring as Bard integration becomes a top priority for the tech giant. Out with the old and in with the new.
👉🏻 The creator of the pope-in-a-puffer-jacket image shares with BuzzFeed why he made it. Let’s just say he wasn’t “all there” in the head and thought “it would be funny.”
Try this AI tool 🪄 : Podsqueeze AI
According to its Product Hunt page, Podsqueeze is a user-friendly AI-powered tool that allows podcasters to generate and customize quality Transcripts, Show Notes, Timestamps, Newsletters, Social Posts, Tweets, and other types of content from podcast audio or video.
Just launched and already trending, Podsqueeze has the podcast community buzzzzing.
[What people are saying]:
I really love Podsqueeze. It significantly speeds up the creation of content around my podcast episode, e.g. SM posts, blog, show notes, newsletter. Nice work!
We are able to upload files and then use the show notes, timestamps and other resources to help generate more content on behalf of our clients.
➜ Click here to check out sample content generated by Podsqueeze.
➜ Click here to try it out yourself for free.
That’s it for today!
Be sure to tune in Sunday for more AI news!
Thank you for choosing us as your AI news source, and don’t forget to follow us on Twitter!