bibliolater , to AI stuff
@bibliolater@qoto.org avatar

On the trail of deepfakes, Drexel researchers identify ‘fingerprints’ of AI-generated video

"The lab’s tools use a sophisticated machine learning program called a constrained neural network. This algorithm can learn, in ways similar to the human brain, what is “normal” and what is “unusual” at the sub-pixel level of images and videos, rather than searching for specific predetermined identifiers of manipulation from the outset."

https://scienmag.com/on-the-trail-of-deepfakes-drexel-researchers-identify-fingerprints-of-ai-generated-video/

@science @engineering

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

bibliolater , to AI stuff
@bibliolater@qoto.org avatar

"All tested LLMs performed poorly on medical code querying, often generating codes conveying imprecise or fabricated information. LLMs are not appropriate for use on medical coding tasks without additional research."

Soroush, A. et al. (2024) 'Large language models are poor medical coders — benchmarking of medical code querying,' NEJM AI [Preprint]. https://doi.org/10.1056/aidbp2300040. @science

bibliolater , to Technology
@bibliolater@qoto.org avatar

Oxford shuts down institute run by Elon Musk-backed philosopher

"The center was run by Nick Bostrom, a Swedish-born philosopher whose writings about the long-term threat of AI replacing humanity turned him into a celebrity figure among the tech elite and routinely landed him on lists of top global thinkers. OpenAI chief executive Sam Altman, Microsoft founder Bill Gates and Tesla chief Musk all wrote blurbs for his 2014 bestselling book Superintelligence."

https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes

@philosophy

bibliolater , to AI stuff
@bibliolater@qoto.org avatar

"Our research pioneers an innovative methodology for generating synthetic training data tailored to Old Aramaic letters. Our pipeline synthesizes photo-realistic Aramaic letter datasets, incorporating textural features, lighting, damage, and augmentations to mimic real-world inscription diversity. Despite minimal real examples, we engineer a dataset of 250 000 training and 25 000 validation images covering the 22 letter classes in the Aramaic alphabet."

Aioanei AC, Hunziker-Rodewald RR, Klein KM, Michels DL (2024) Deep Aramaic: Towards a synthetic data paradigm enabling machine learning in epigraphy. PLOS ONE 19(4): e0299297. https://doi.org/10.1371/journal.pone.0299297 @linguistics

18+ ErosBlog , to AI stuff
@ErosBlog@kinkyelephant.com avatar

I tested a murkyweb cryptocurrency-fueled AI-powered deepfake tool that advertises an "upload a photo, we'll remove the clothing" service. I tried it on a 1957 pic of long-deceased cabaret dancer Jenny Lee. It took only three clicks and worked disturbingly well. As I wrote on ErosBlog, this is software that can hurt people: http://www.erosblog.com/2024/04/17/noods-deep-and-otherwise/

    pluralistic , to AI stuff
    @pluralistic@mamot.fr avatar

    "The reality is you can't build a $100b industry around techn that's kind of useful, mostly in mundane ways, and that boasts perhaps small increases in productivity if and only if the people who use it fully understand its limitations.You certainly can't justify the kind of exploitation, extraction, and environmental cost the industry has been mostly getting away with, in part because people have believed lofty promises of someday changing the world."

    https://www.citationneeded.news/ai-isnt-useless-2/

    molly0xfff , to AI stuff
    @molly0xfff@hachyderm.io avatar

    I spent a long time experimenting with AI before finally writing about it in depth. It can be pretty useful — but is it worth it?

    https://www.citationneeded.news/ai-isnt-useless/

    pluralistic , to AI stuff
    @pluralistic@mamot.fr avatar

    > I find my feelings about AI are actually pretty similar to my feelings about blockchains: they do a poor job of much of what people try to do with them, they can't do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial. And while I do think that AI tools are more broadly useful than blockchains, they also come with similarly monstrous costs.

    https://www.citationneeded.news/ai-isnt-useless-2/

    rabia_elizabeth , to AI stuff
    @rabia_elizabeth@mefi.social avatar

    What is your publication's officially stated tolerance for AI-generated content in article submissions from authors?

    0%? 5%? 25%? Something else?

    And are you authorized to reject submissions if an article exceeds that percentage?

    @editing @editors @writingcommunity

    CultureDesk , to AI stuff
    @CultureDesk@flipboard.social avatar

    Amazon is filled with garbage ebooks, often a result of keyword scrapers finding trending topics, and then so-called publishers using AI and cheap ghostwriters to generate books. "If, as they used to say, everyone has a book in them, AI has created a world where tech utopianists dream openly about excising the human part of writing a book — any amount of artistry or craft or even just sheer effort — and replacing it with machine-generated streams of text," writes Vox's Constance Grady. Here's her story about the underbelly of online self-publishing.

    https://flip.it/PQ1vEl

    @bookstodon

    bibliolater , to AI stuff
    @bibliolater@qoto.org avatar

    ChatGPT hallucinates fake but plausible scientific citations at a staggering rate, study finds

    "MacDonald found that a total of 32.3% of the 300 citations generated by ChatGPT were hallucinated. Despite being fabricated, these hallucinated citations were constructed with elements that appeared legitimate — such as real authors who are recognized in their respective fields, properly formatted DOIs, and references to legitimate peer-reviewed journals."

    https://www.psypost.org/chatgpt-hallucinates-fake-but-plausible-scientific-citations-at-a-staggering-rate-study-finds/

    @science @ai

    attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

    rabia_elizabeth , to AI stuff
    @rabia_elizabeth@mefi.social avatar

    Martha Wells writes about creating an AI character that does not want what the humans want... and, unsurprisingly, about later-life autism diagnosis.

    Cc @marthawells

    Via @metafilter

    @actuallyautistic

    https://www.metafilter.com/203313/That-vast-astonishing-multiplicity-of-vision

    appassionato , to AI stuff
    @appassionato@mastodon.social avatar

    Secrets of Machine Learning: How It Works and What It Means for You by Tom Kohn, 2024

    Cutting through the mass of technical literature on machine learning and AI and the plethora of fear-mongering books on the rise of killer robots, Secrets of Machine Learning offers a clear-sighted explanation for the informed reader of what this new technology is, what it does, how it works, and why it's so important.

    @bookstodon



    editor , to AI stuff
    @editor@floe.earth avatar

    Petition to ban the use of so-called AI-detectors in education. https://chng.it/k22JSrpG92

    @edutooters

    glynmoody , to AI stuff
    @glynmoody@mastodon.social avatar

    An Only Slightly Modest Proposal: If AI Companies Want More Content, They Should Fund Reporters, And Lots Of Them - https://www.techdirt.com/2024/04/11/an-only-slightly-modest-proposal-if-ai-companies-want-more-content-they-should-fund-reporters-and-lots-of-them/ annoying: @mmasnick has written the exact article I was about to pen...

    dantappan , to AI stuff
    @dantappan@better.boston avatar

    Wikipedia is gauging interest for an extension that uses AI to see if any claim is cited on Wikipedia ( meta.wikimedia.org )

    A prototype is available, though it's Chrome-only and English-only at the moment. How this'll work is you select some text and then click on the extension, which will try to "return the relevant quote and inference for the user, along with links to article and quality signals"....


    stephenwhq , to AI stuff
    @stephenwhq@mastodon.social avatar

    As part of my experiments with AI art, I commissioned human art. Which I prefer.

    I have a couple of pieces around the novella in progress from artist Paul Humphreys. One is a study of fire jays, birds local to Suncup Falls, and the other is a character study of Livvy and Marcus, two of the six main characters at the school. They are talented theatre kids and oh boy, do they know it.

    https://stephencox.substack.com/p/avoiding-sludge-state-using-feedback

    @bookstodon

    eff , to AI stuff
    @eff@mastodon.social avatar

    AI does what we teach it to do. Once you accept that, it becomes substantially easier to avoid doomsday scenarios. https://www.eff.org/deeplinks/2024/03/how-avoid-ai-apocalypse-one-easy-step

    NatureMC , to AI stuff
    @NatureMC@mastodon.online avatar

    Every time you think it couldn't get any worse, a new revelation tops it off. As an author, I wonder how long it will take for the book market to be completely enshittified.
    Thank you for the ! ⬆️ @writers @bookstodon

    books

    seb_tmg , to AI stuff
    @seb_tmg@mastodon.cosmicnation.co avatar

    When people start to apply definitions like sentient and self-aware to AI, I get sad and start to believe more and more that part of our society is sliding even further off course than it already has. The letter A in AI says it all. If you want to apply sentience and self-awareness to AI, it is all artificial. It's simulated. Humans have created truly sentient and self-aware beings for ages. It is called making and giving birth to babies.

    @consciousliving

    eric , to Technology
    @eric@social.coop avatar

    Lavender is traditionally used in France to reduce the moth population.

    Only a small proportion of French Jews emigrate to Israel. These binationals are subject to compulsory military service.

    This army prepared and launched the first AI war in 2021: https://techhub.social/@estelle/111510965384428730

    A development team has designed a more efficient product, which a Frenchman has suggested calling Lavender: https://techhub.social/@estelle/112220409975979758 @palestine

    estelle , to Random
    @estelle@techhub.social avatar

    The terrible human toll in Gaza has many causes.
    A chilling investigation by +972 highlights efficiency:

    1. An engineer: “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed.”

    2. An AI outputs "100 targets a day". Like a factory with murder delivery:

    "According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”"

    3. "The third is “power targets,” which includes high-rises and residential towers in the heart of cities, and public buildings such as universities, banks, and government offices."

    🧶

    18+ estelle OP ,
    @estelle@techhub.social avatar

    Here is a follow-up of Yuval Abraham's investigation:

    "The Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties"
    https://www.972mag.com/lavender-ai-israeli-army-gaza/

    @israel @ethics @military @idf

    estelle OP ,
    @estelle@techhub.social avatar

    It was easier to locate the individuals in their private houses.

    “We were not interested in killing operatives only when they were in a military building or engaged in a military activity. On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”

    Yuval Abraham reports: https://www.972mag.com/lavender-ai-israeli-army-gaza/

    (to follow) 🧶 @palestine @israel @ethics @military @idf @terrorism
