
Philosophy

Fallout from St. Norbert's mass firing of humanities junior faculty

Leiter Reports: A Philosophy Blog - 27 March 2024 - 3:44pm
In the wake of this, I've heard from a number of prominent moral and political philosophers who have now cancelled speaking engagements (including for their Killeen Lecture series) at St. Norbert College to protest the shabby treatment of the humanities,... Brian Leiter

On retiring

Leiter Reports: A Philosophy Blog - 27 March 2024 - 3:27pm
An apt comment, from the earlier thread, from philosopher Paul Guyer (recently emeritus at Brown University) that deserves special notice: We have it good in philosophy and other humanities fields: retiring does not necessitate the end of research and writing,... Brian Leiter

Talk about having a change of heart

Leiter Reports: A Philosophy Blog - 27 March 2024 - 1:11pm
From NeoNazi to observant Jew! Brian Leiter

Epiphenomenalism about consciousness

Leiter Reports: A Philosophy Blog - 27 March 2024 - 12:01pm
Philosopher Helen Yetter-Chappell sets out the case. Brian Leiter

Trump, the Mafioso, an ongoing saga

Leiter Reports: A Philosophy Blog - 26 March 2024 - 3:16pm
He's now embracing it openly. (Thanks to Steve Sverdlik for the pointer.) Brian Leiter

Another UK university crisis brewing: University of Essex

Leiter Reports: A Philosophy Blog - 26 March 2024 - 1:02pm
According to the academic staff union, the administration has proposed "a new Academic Framework, including (among other things) a *45-week* teaching period, up to *3* entry points for students within the year, and sweeping rigid changes to module structure." Comments... Brian Leiter

What is it like to be "manic"?

Leiter Reports: A Philosophy Blog - 26 March 2024 - 11:53am
Philosopher Paul Lodge discusses. (This is from a few years ago, but I only just came across this interesting essay.) Brian Leiter

SUNY-Fredonia does indeed axe philosophy and a dozen other programs

Leiter Reports: A Philosophy Blog - 25 March 2024 - 3:09pm
Here. [Link fixed] As we noted previously, this will surely be an issue in Professor Kershnar's lawsuit (i.e., that this was a stealth way of violating his First Amendment rights). (Thanks to Brock Sides for the pointer.) Brian Leiter

I'd settle for this as an epitaph

Leiter Reports: A Philosophy Blog - 25 March 2024 - 1:40pm
I've been fortunate over the last few years to have a steady ally on issues pertaining to academic freedom, and UChicago's Kalven Report, in the eminent biologist Jerry Coyne, now emeritus here. A propos a recent misrepresentation of the Kalven... Brian Leiter

In Memoriam: John F. Malcolm (1930-2023)

Leiter Reports: A Philosophy Blog - 25 March 2024 - 1:06pm
Professor Malcolm, a longtime member of the Department of Philosophy at the University of California at Davis, where he was emeritus, died in September 2023. A scholar of ancient philosophy, he was best-known for his work on Plato. The UC... Brian Leiter

Akrasia in The Guardian

Leiter Reports: A Philosophy Blog - 25 March 2024 - 11:50am
A rather substantial piece. (Thanks to Eric Wolf for the pointer.) Brian Leiter

In Memoriam: William W. Tait (1929-2024)

Leiter Reports: A Philosophy Blog - 22 March 2024 - 7:33pm
Professor Tait, a leading figure in philosophy of mathematics, was emeritus at the University of Chicago, where he taught for nearly a quarter-century. Matt Boyle, Chair of the Department here, kindly shared a memorial notice, that is below the fold.... Brian Leiter

A new podcast series of "conversations" from Pitt's Center for the Philosophy of Science

Leiter Reports: A Philosophy Blog - 22 March 2024 - 3:08pm
The first one is here. (Thanks to Edouard Machery for the pointer.) Brian Leiter

Supporting content compliance using Generative AI

Story Needle - 21 March 2024 - 11:53pm

Content compliance is challenging and time-consuming. Surprisingly, one of the most interesting use cases for Generative AI in content operations is to support compliance.

Compliance shouldn’t be scary

Compliance can seem scary. Authors must use the right wording lest things go haywire later, be it bad press or social media exposure, regulatory scrutiny, or even lawsuits. Even when the odds of mistakes are low because the compliance process is rigorous, satisfying compliance requirements can seem arduous. It can involve rounds of rejections and frustration.

Competing demands. Enterprises recognize that compliance is essential and touches ever more content areas, but scaling compliance is hard. Lawyers and other experts know what's compliant but often lack knowledge of what writers will be creating, and the growing volume of content to review strains compliance teams themselves.

Both writers and reviewers need better tools to make compliance easier and more predictable.

Compliance is risk management for content

Because words matter, they carry risk. The wrong phrasing or missing wording can expose firms to legal liability. The growing volume of content places heavy demands on the legal and compliance teams that must review it.

A major issue in compliance is consistency. Inconsistent content is risky. Compliance teams want consistent phrasing so that the message complies with regulatory requirements while aligning with business objectives.

Compliant content is especially critical in fields such as finance, insurance, pharmaceuticals, medical devices, and the safety of consumer and industrial goods. Content about software also faces growing regulatory scrutiny, for example around privacy disclosures and data rights. All kinds of products can be required to disclose information relating to health, safety, and environmental impacts.

Compliance involves both what’s said and what’s left unsaid. Broadly, compliance looks at four thematic areas:

  1. Truthfulness
    1. Factual precision and accuracy 
    2. Statements would not reasonably be misinterpreted
    3. Not misleading about benefits, risks, or who is making a claim
    4. Product claims backed by substantial evidence
  2. Completeness
    1. Everything material is mentioned
    2. Nothing is undisclosed or hidden
    3. Restrictions or limitations are explained
  3. Whether impacts are noted
    1. Anticipated outcomes (future obligations and benefits, timing of future events)
    2. Potential risks (for example, potential financial or health harms)
    3. Known side effects or collateral consequences
  4. Whether the rights and obligations of parties are explained
    1. Contractual terms of parties
    2. Supplier’s responsibilities
    3. Legal liabilities 
    4. Voiding of terms
    5. Opting out
Example of a proposed rule from the Federal Trade Commission. Source: Federal Register

Content compliance affects more than legal boilerplate. Many kinds of content can require compliance review, from promotional messages to labels on UI checkboxes. Compliance can be a concern for any content type that expresses promises, guarantees, disclaimers, or terms and conditions.  It can also affect content that influences the safe use of a product or service, such as instructions or decision guidance. 

Compliance requirements will depend on the topic and intent of the content, as well as the jurisdiction of the publisher and audience.  Some content may be subject to rules from multiple bodies, both governmental regulatory agencies and “voluntary” industry standards or codes of conduct.

“Create once, reuse everywhere” is not always feasible. Historically, compliance teams have relied on pre-vetted legal statements that appear at the footer of web pages or in terms and conditions linked from a web page. Such content is comparatively easy to lock down and reuse where needed.

Governance, risk, and compliance (GRC) teams want consistent language, which helps them keep tabs on what’s been said and where it’s been presented. Reusing the same exact language everywhere provides control.

But as the scope of content subject to compliance concerns has widened to touch more types of content, the ability to quarantine compliance-related statements in separate content items has diminished. Content that touches on compliance must match the context in which it appears and be integrated into the content experience. Not all such content fits a standardized template, even though the issues discussed are repeated.

Compliance decisions rely on nuanced judgment. Authors may not think a statement appears deceptive, but regulators might have other views about what constitutes “false claims.” Compliance teams have expertise in how regulators might interpret statements.  They draw on guidance in statutes, regulations, policies, and elaborations given in supplementary comments that clarify what is compliant or not. This is too much information for authors to know.

Content and compliance teams need ways to address recurring issues in contextually relevant ways.

Generative AI points to possibilities to automate some tasks to accelerate the review process. 

Strengths of Generative AI for compliance

Generative AI may seem like an unlikely technology to support compliance. It’s best known for its stochastic behavior, which can produce hallucinations – the stuff of compliance nightmares.  

Compliance tasks reframe how GenAI is used.  GenAI’s potential role in compliance is not to generate content but to review human-developed content. 

Because content generation produces so many hallucinations, researchers have been exploring ways to use LLMs to check GenAI outputs to reduce errors. These same techniques can be applied to the checking of human-developed content to empower writers and reduce workloads on compliance teams.

Generative AI can find discrepancies and deviations from expected practices. It trains its attention on patterns in text and other forms of content. 

While GenAI doesn’t understand the meaning of the text, it can locate places in the text that match other examples–a useful capability for authors and compliance teams needing to make sure noncompliant language doesn’t slip through.  Moreover, LLMs can process large volumes of text. 

GenAI focuses on wording and phrasing.  Generative AI processes sequences of text strings called tokens. Tokens aren’t necessarily full words or phrases but subparts of words or phrases. They are more granular than larger content units such as sentences or paragraphs. That granularity allows LLMs to process text at a deep level.
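To make the idea concrete, here is a minimal Python sketch of subword tokenization, assuming the open-source tiktoken library; any subword tokenizer would illustrate the same point.

import tiktoken  # assumed dependency: pip install tiktoken

# Load a common GPT-style encoding and split a phrase into subword tokens.
encoding = tiktoken.get_encoding("cl100k_base")

phrase = "This product is guaranteed to be completely risk-free."
token_ids = encoding.encode(phrase)
tokens = [encoding.decode([token_id]) for token_id in token_ids]

print(tokens)
# Tokens are subparts of words and phrases; a word like "risk-free" may be
# split into several tokens, which is the granularity LLMs actually process.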

LLMs can compare sequences of strings and determine whether two sequences are similar or not. Tokenization allows GenAI to identify patterns in wording. It can spot similar phrasing even when different verb tenses or pronouns are used.

LLMs can support compliance by comparing texts and determining whether one string of text is similar to others. They can compare a drafted text to either a good example to follow or a bad example to avoid. Since wording is highly contextual, matches may not be exact, but they will consist of highly similar text patterns.
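As a rough illustration (not any vendor's actual method), the sketch below uses the sentence-transformers library, an assumed dependency, to score a hypothetical draft clause against one approved example and one forbidden example.

from sentence_transformers import SentenceTransformer, util  # assumed dependency

# Encode a draft clause plus an approved and a forbidden example, then
# compare them with cosine similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")

draft = "Your money is completely safe with us."
approved = "Deposits are insured up to applicable limits."
forbidden = "Your investment carries no risk of loss."

embeddings = model.encode([draft, approved, forbidden])
similarity_to_approved = util.cos_sim(embeddings[0], embeddings[1]).item()
similarity_to_forbidden = util.cos_sim(embeddings[0], embeddings[2]).item()

print(f"similarity to approved wording:  {similarity_to_approved:.2f}")
print(f"similarity to forbidden wording: {similarity_to_forbidden:.2f}")
# A draft that sits closer to forbidden wording than to approved wording
# can be routed to compliance review.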

GenAI can provide an X-ray view of content. Not all words are equally important. Some words carry more significance due to their implied meaning. But it can be easy to overlook special words embedded in the larger text or not realize their significance.

Generative AI can identify words or phrases within the text that carry very specific meanings from a compliance perspective. These terms can then be flagged and linked to canonical, authoritative definitions so that writers know how compliance reviewers will read them.

Generative AI can also flag vague or ambiguous words that have no reference defining what they mean in context. For example, if the text mentions the word “party,” a definition of what is meant by that term should be available in the immediate context where it is used.
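A lightweight version of this idea can be sketched in plain Python: keep a glossary of compliance-significant terms and flag each occurrence together with its canonical definition. The glossary entries below are hypothetical examples, not legal guidance.

import re

# Hypothetical glossary mapping compliance-significant terms to canonical definitions.
GLOSSARY = {
    "party": "The person or organization entering into the agreement.",
    "material": "Information a reasonable person would consider important.",
    "guarantee": "A legally binding promise that requires substantiation.",
}

def flag_terms(text: str) -> list[dict]:
    """Return each glossary term found in the text, with its position and definition."""
    findings = []
    for term, definition in GLOSSARY.items():
        for match in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            findings.append({"term": term, "position": match.start(), "definition": definition})
    return findings

draft = "Either party may cancel at any time, and we guarantee a full refund."
for finding in flag_terms(draft):
    print(f"{finding['term']!r} at {finding['position']}: {finding['definition']}")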

GenAI’s “multimodal” capabilities help evaluate the context in which the content appears. Generative AI is not limited to processing text strings. It is becoming more multimodal, allowing it to “read” images. This is helpful when reviewing visual content for compliance, given that regulators insist that disclosures must be “conspicuous” and located near the claim to which they relate.

GenAI is incorporating large vision models (LVMs) that can process images containing text and layout. LVMs accept images as input prompts and identify their elements. Multimodal evaluation can assess three critical compliance factors relating to how content is displayed (see the sketch after the list):

  1. Placement
  2. Proximity
  3. Prominence
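The sketch below shows how such a check might be phrased as a prompt to a vision-capable model using the OpenAI Python SDK; the model name, prompt wording, and image URL are assumptions, not a specific vendor's compliance feature.

from openai import OpenAI  # assumed dependency: pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Review this ad image for disclosure compliance and report on three factors: "
    "1) Placement: where does the disclosure appear? "
    "2) Proximity: is the disclosure near the claim it qualifies? "
    "3) Prominence: is it conspicuous in size, contrast, and position?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/ad-to-review.png"}},  # hypothetical image
        ],
    }],
)

print(response.choices[0].message.content)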

Two writing tools suggest how GenAI can improve compliance.  The first, the Draft Analyzer from Bloomberg Law, can compare clauses in text. The second, from Writer, shows how GenAI might help teams assess compliance with regulatory standards.

Use Case: Clause comparison

Clauses are the atomic units of content compliance–the most basic units that convey meaning. When read by themselves, clauses don’t always represent a complete sentence or a complete standalone idea. However, they convey a concept that makes a claim about the organization, its products, or what customers can expect. 

While structured content management tends to focus on whole chunks of content, such as sentences and paragraphs, compliance staff focus on clauses–phrases within sentences and paragraphs. In LLM terms, a clause is a short sequence of tokens.

Clauses carry legal implications. Compliance teams want to verify the incorporation of required clauses and to reuse approved wording.

While the use of certain words or phrases may be forbidden, in other cases, words can be used only in particular circumstances.  Rules exist around when it’s permitted to refer to something as “new” or “free,” for example.  GenAI tools can help writers compare their proposed language with examples of approved usage.

Giving writers a pre-compliance vetting of their draft. Bloomberg Law has created a generative AI plugin called Draft Analyzer that works inside Microsoft Word. While the product is geared toward lawyers drafting long-form contracts, its technology principles are relevant to anyone who drafts content that requires compliance review.

Draft Analyzer provides “semantic analysis tools” to “identify and flag potential risks and obligations.”   It looks for:

  • Obligations (what’s promised)
  • Dates (when obligations are effective)
  • Trigger language (under what circumstances the obligation is effective)

For clauses of interest, the tool compares the text to other examples, known as “precedents.” Precedents are examples of similar language drawn from prior language used within an organization or from “market standard” language used by other organizations. It can even generate a composite standard example based on language your organization has used previously. Precedents serve as a “benchmark” to compare draft text with conforming examples.

Importantly, writers can compare draft clauses with multiple precedents since the words needed may not match exactly with any single example. Bloomberg Law notes: “When you run Draft Analyzer over your text, it presents the Most Common and Closest Match clusters of linguistically similar paragraphs.”  By showing examples based on both similarity and salience, writers can see if what they want to write deviates from norms or is simply less commonly written.

Bloomberg Law cites four benefits of their tool.  It can:

  • Reveal how “standard” some language is.
  • Reveal if language is uncommon with few or no source documents and thus a unique expression of a message.
  • Promote learning by allowing writers to review similar wording used in precedents, enabling them to draft new text that avoids weaknesses and includes strengths.
  • Spot “missing” language, especially when precedents include language not included in the draft. 
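The kind of comparison behind these benefits can be approximated with embeddings, as in the rough sketch below: score a draft clause against a set of precedents to surface the closest match and a crude measure of how "standard" the draft wording is. This is not Bloomberg Law's implementation; the precedents, model, and threshold are assumptions.

from sentence_transformers import SentenceTransformer, util  # assumed dependency

model = SentenceTransformer("all-MiniLM-L6-v2")

draft = "Either party may terminate this agreement with 30 days' written notice."
precedents = [  # hypothetical precedent clauses
    "Either party may terminate this agreement upon thirty (30) days' written notice.",
    "This agreement may be terminated by either party with 60 days' notice.",
    "The supplier shall indemnify the customer against third-party claims.",
]

draft_embedding = model.encode(draft)
precedent_embeddings = model.encode(precedents)
similarities = util.cos_sim(draft_embedding, precedent_embeddings)[0]

closest = int(similarities.argmax())
standard_count = int((similarities > 0.7).sum())  # 0.7 is an arbitrary threshold

print(f"Closest precedent ({similarities[closest].item():.2f}): {precedents[closest]}")
print(f"{standard_count} of {len(precedents)} precedents closely match the draft.")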

While clauses often deal with future promises, other statements that must be reviewed by compliance teams relate to factual claims. Teams need to check whether the statements made are true. 

Use Case: Claims checking

Organizations want to put a positive spin on what they’ve done and what they offer. But sometimes they make claims that are debatable or even false.

Writers need to be aware of when they make a contestable claim and whether they offer proof to support such claims.

For example, how can a drug maker use the phrase “drug of choice”? The FDA notes: “The phrase ‘drug of choice,’ or any similar phrase or presentation, used in an advertisement or promotional labeling would make a superiority claim and, therefore, the advertisement or promotional labeling would require evidence to support that claim.” 

The phrase “drug of choice” may seem like a rhetorical device to a writer, but to a compliance officer, it represents a factual claim. Rhetorical phrases often don’t stand out as factual claims because they are used widely and casually. Fortunately, GenAI can help check for the presence of claims in text.

Using GenAI to spot factual claims. The development of AI fact-checking techniques has been motivated by the need to see where generative AI may have introduced misinformation or hallucinations. These techniques can also be applied to human-written content.

The discipline of prompt engineering has developed a prompt that can check if statements make claims that should be factually verified.  The prompt is known as the “Fact Check List Pattern.”  A team at Vanderbilt University describes the pattern as a way to “generate a set of facts that are contained in the output.” They note: “The user may have expertise in some topics related to the question but not others. The fact check list can be tailored to topics that the user is not as experienced in or where there is the most risk.” They add: “The Fact Check List pattern should be employed whenever users are not experts in the domain for which they are generating output.”  

The fact check list pattern helps writers identify risky claims, especially ones about issues for which they aren’t experts.
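To make the pattern concrete, here is a minimal sketch of applying a fact check list prompt to a human-written draft; the prompt wording and model name are assumptions rather than the Vanderbilt team's or any vendor's exact implementation.

from openai import OpenAI  # assumed dependency

client = OpenAI()

draft = (
    "Our supplement is the drug of choice for joint pain and is clinically "
    "proven to work twice as fast as leading brands."
)

fact_check_prompt = (
    "Read the marketing copy below and generate a fact check list: every statement "
    "a regulator could treat as a factual or superiority claim requiring supporting "
    "evidence. For each claim, briefly note why it needs substantiation.\n\n" + draft
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[{"role": "user", "content": fact_check_prompt}],
)

print(response.choices[0].message.content)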

The fact check list pattern is implemented in a commercial tool from the firm Writer. The firm states that its product “eliminates [the] risk of ‘plausible BS’ in highly regulated industries” and “ensures accuracy with fact checks on every claim.”

Screenshot of Writer functionality evaluating claims in an ad image. Source: VentureBeat

Writer illustrates claim checking with a multimodal example, where a “vision LLM” assesses visual images such as pharmaceutical ads. The LLM can assess the text in the ad and determine if it is making a claim. 

GenAI’s role as a support tool

Generative AI doesn’t replace writers or compliance reviewers. But it can make the process smoother and faster for everyone by spotting issues early and accelerating the development of compliant copy.

While GenAI won’t write compliant copy, it can be used to rewrite copy to make it more compliant. Writer advertises that its tool lets users transform copy and “rewrite in a way that’s consistent with an act” such as the Military Lending Act.

While Regulatory Technology tools (RegTech) have been around for a few years now, we are in the early days of using GenAI to support compliance. Because of compliance’s importance, we may see options emerge targeting specific industries. 

Screenshot of the Federal Register formats menu: formats available for Federal Register notices

It’s encouraging that regulators and their publishers, such as the Federal Register in the US, provide regulations in developer-friendly formats such as JSON or XML. The same is happening in the EU. This open access will encourage the development of more applications.
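As a small example of that open access, the sketch below queries the Federal Register's public JSON API for recent documents on a topic; the query parameters reflect the documented v1 API but should be verified before relying on them.

import requests  # assumed dependency

response = requests.get(
    "https://www.federalregister.gov/api/v1/documents.json",
    params={
        "conditions[term]": "advertising disclosure",  # assumed search parameter
        "per_page": 5,
        "order": "newest",
    },
    timeout=30,
)
response.raise_for_status()

for document in response.json().get("results", []):
    print(document.get("publication_date"), document.get("title"))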

– Michael Andrews

The post Supporting content compliance using Generative AI appeared first on Story Needle.

Ney from UC Davis to LMU Munich

Leiter Reports: A Philosophy Blog - 21 March 2024 - 10:51pm
Alyssa Ney (metaphysics, philosophy of science and physics, philosophy of mind), Professor of Philosophy at the University of California at Davis, has accepted the Chair in Metaphysics at LMU Munich, starting July 1, 2024. This will solidify LMU Munich's position... Brian Leiter

After pretending to listen, U of Kent senior management decides to close its successful philosophy department anyway

Leiter Reports: A Philosophy Blog - 21 March 2024 - 4:13pm
Philosopher Simon Kirchin, formerly of Kent, asked me to share the following: Further to previous announcements of plans, the University of Kent has decided to close its Department of Philosophy. It will take no new students from now on. Existing... Brian Leiter

John K. Wilson is not a reliable supporter of academic freedom

Leiter Reports: A Philosophy Blog - 21 March 2024 - 1:02pm
Years ago, I was impressed that Mr. Wilson (a freelance "academic freedom" expert [sic] as it were) was one of the few who spoke up against the attack on the free speech rights of Ward Churchill. Alas, it's... Brian Leiter

Notre Dame Philosophical Reviews

Leiter Reports: A Philosophy Blog - 20 March 2024 - 3:06pm
On FB, Alex Guerrero noted the dramatic decline in the number of reviews NDPR publishes each year in the last few years, which generated a lively discussion. Here's what happened. Gary and Staci Gutting worked tirelessly (and without much support... Brian Leiter

On Berkeley, Kant and perspectivalism

Leiter Reports: A Philosophy Blog - 20 March 2024 - 12:02pm
Paul Franks discusses at IAITV. Brian Leiter
