Audio deepfake

http://dbpedia.org/resource/Audio_deepfake

An audio deepfake is speech synthesized by artificial intelligence to sound like a specific person saying things they never said. The technology was initially developed for beneficial applications: it can produce audiobooks, restore the voices of people who have lost them to throat disease or other medical conditions, and power more personalized digital assistants and natural-sounding speech translation services. Commercially, it has opened the door to several opportunities.

Audio deepfakes, also called audio manipulations, are becoming widely accessible on ordinary mobile devices and personal computers, and the same tools have been used to spread misinformation through audio. This has raised cybersecurity concerns among the global public: deepfakes can serve as a logical-access voice-spoofing technique, and they can be used to manipulate public opinion for propaganda, defamation, or terrorism. Vast numbers of voice recordings are transmitted over the Internet every day, and detecting spoofed audio among them is challenging. Attackers have targeted not only individuals and organizations but also politicians and governments. In early 2020, scammers used AI-based software to impersonate a CEO's voice over the phone and authorize a money transfer of about $35 million. Authenticating distributed audio recordings is therefore necessary to curb the spread of misinformation.
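
Spoofing detection is commonly framed as binary classification: extract acoustic features from a clip, then score how likely it is to be synthetic. The sketch below illustrates that idea with MFCC statistics and a random-forest classifier. It is a minimal illustration, not the method of any particular detector; it assumes the librosa and scikit-learn libraries are installed, and the file names and labels are hypothetical placeholders rather than a real dataset.

import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def extract_features(path: str) -> np.ndarray:
    """Summarize a clip as the mean and std of its MFCCs."""
    signal, sr = librosa.load(path, sr=16000)  # resample to 16 kHz
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical training clips: (path, label), 1 = deepfake, 0 = genuine.
training_clips = [("genuine_01.wav", 0), ("cloned_01.wav", 1)]  # placeholders

X = np.stack([extract_features(path) for path, _ in training_clips])
y = np.array([label for _, label in training_clips])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

# Score an unseen recording: probability that it is synthetic.
print(clf.predict_proba(extract_features("unknown.wav").reshape(1, -1)))

Deployed detectors use far richer spectral features and larger models trained on substantial corpora, but the overall pipeline shape, feature extraction followed by classification, is the same.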