permission Archives - News/Media Alliance

Global Principles on Artificial Intelligence (AI)
September 6, 2023 | https://www.newsmediaalliance.org/global-principles-on-artificial-intelligence-ai/


Introduction

AI developers and regulators have a unique opportunity to establish an ethical AI framework to boost innovation and create new business opportunities, while ensuring that AI develops in a way that is responsible and sustainable. To achieve this, it is essential that AI systems are trained on content and data which is accessed lawfully, including by appropriate prior authorisations obtained for the use of copyright protected works and other subject matter, and that the content and sources used to train the systems are clearly identified. This document sets out principles that the undersigned publisher organisations believe should govern the development, deployment, and regulation of Artificial Intelligence systems and applications. These principles cover issues related to intellectual property, transparency, accountability, quality and integrity, fairness, safety, design, and sustainable development.

The proliferation of AI systems, especially Generative Artificial Intelligence (GAI), presents a sea change in how we interact with and deploy technology and creative content. While AI technologies will provide substantial benefits to the public, content creators, businesses, and society at large, they also pose risks for the sustainability of the creative industries, the public’s trust in knowledge, journalism, and science, and the health of our democracies.

We, the undersigned organisations, fully embrace the opportunities AI will bring to our sector and call for the responsible development and deployment of AI systems and applications. We strongly believe that these new tools will facilitate innovative breakthroughs when developed in accordance with established principles and laws that protect publishers’ intellectual property (IP), valuable brands, trusted consumer relationships, and investments. The indiscriminate appropriation of our intellectual property by AI systems is unethical, harmful, and an infringement of our protected rights.

Our organisations represent thousands of creative professionals around the world, including news, magazine, and book publishers and the academic publishing industry such as learned societies and university presses. Our members invest considerable time and resources creating high-quality content that keeps our communities informed, entertained, and engaged. These principles – applying to the use of our content to train and deploy AI systems, as they are understood and used today – are aimed at ensuring our continued ability to innovate, create and disseminate such content, while facilitating the responsible development of trustworthy AI systems.

Intellectual Property

1) Developers, operators, and deployers of AI systems must respect intellectual property rights, which protect the rights holders’ investments in original content. These rights include all applicable copyright, ancillary rights, and other legal protections, as well as contractual restrictions or limitations imposed by rightsholders on the access to and use of their content. Therefore, developers, operators, and deployers of AI systems—as well as legislators, regulators, and other parties involved in drafting laws and policies regulating AI—must respect the value of creators’ and owners’ proprietary content in order to protect the livelihoods of creators and rightsholders.

2) Publishers are entitled to negotiate for and receive adequate remuneration for use of their IP. AI system developers, operators, and deployers should not be crawling, ingesting, or using our proprietary creative content without express authorisation. Use of intellectual property by AI systems for training, surfacing, or synthesising is usually expressly prohibited in online terms and conditions of the rightsholders, and not covered by pre-existing licensing agreements. Where developers have been permitted to crawl content for one purpose (for example, indexing for search), they must seek express authorisation for use of the IP for other purposes, such as inclusion within LLMs. These agreements should also account for harms that AI systems may cause, or have already caused, to creators, owners, and the public.

3) Copyright and ancillary rights protect content creators and owners from the unlicensed use of their content. Like all other uses of protected works, use of protected works in AI systems is subject to compliance with the relevant laws concerning copyrights, ancillary rights, and permissions within protocols. To ensure that access to content for use in AI systems is lawful, including through appropriate licenses and permissions obtained from relevant rightsholders, it is essential that rightsholders are able effectively to enforce their rights, and where applicable, require attribution and remuneration.

4) Existing markets for licensing creators’ and rightsholders’ content should be recognised. Valuing publishers’ legitimate IP interests need not impede AI innovation because frameworks already exist to permit use in return for payment, including through licensing. We encourage efficient licensing models that can facilitate training of trustworthy and high-quality AI systems.

Transparency

5) AI systems should provide granular transparency to creators, rightsholders, and users. It is essential that strong regulations are put in place to require developers of AI systems to keep detailed records of publisher works and associated metadata, alongside the legal basis on which they were accessed, and to make this information available to the extent necessary for publishers to enforce their rights where their content is included in training datasets. The obligation to keep accurate records should go back to the start of AI development to provide a full chain of use regardless of the jurisdiction in which the training or testing may have taken place. Failure to keep detailed records should give rise to a presumption of use of the data in question. When datasets or applications developed by non-profit, research, or educational third parties are used to power commercial AI systems, this must be clearly disclosed so that publishers can enforce their rights. Where developers use AI tools as a component in the process of generating knowledge from knowledge, there should be transparency on the application of these tools, including appropriate and clear accountability and provenance mechanisms, as well as clear attribution where appropriate in accordance with the terms and conditions of the publishers of the original content. Without limiting and subject to paragraphs 6 and 9, AI developers should work with publishers to develop mutually acceptable attribution and navigation standards and formats. Users should also be provided with comprehensible information about how such systems operate to make judgments about system and output quality and trustworthiness.
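The principles do not prescribe any particular record-keeping format. Purely as an illustrative sketch of the kind of per-work provenance record paragraph 5 describes, the fragment below captures the elements it names: the work and its metadata, the legal basis on which it was accessed, the uses made of it, and the jurisdiction in which training or testing took place. All field names and example values are hypothetical.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class TrainingProvenanceRecord:
        """Hypothetical per-work record an AI developer could retain so that a
        publisher can verify whether, and on what legal basis, its content was used."""
        work_id: str            # publisher's identifier for the work (e.g. URL or DOI)
        publisher: str          # rightsholder name
        date_accessed: date     # when the work was crawled or ingested
        access_method: str      # e.g. "licensed feed", "public crawl"
        legal_basis: str        # e.g. "licence ref. EX-2023-001", "TDM research exception"
        uses: list[str] = field(default_factory=list)             # e.g. ["training", "surfacing"]
        dataset_versions: list[str] = field(default_factory=list) # dataset or model builds containing the work
        jurisdiction: Optional[str] = None                        # where training or testing took place

    # Example entry, entirely invented for illustration:
    record = TrainingProvenanceRecord(
        work_id="https://example-publisher.com/articles/2023/ai-policy",
        publisher="Example Publisher",
        date_accessed=date(2023, 6, 1),
        access_method="licensed feed",
        legal_basis="licence agreement ref. EX-2023-001",
        uses=["training"],
        dataset_versions=["corpus-v2"],
        jurisdiction="EU",
    )

A register of such records would provide the full chain of use the principle calls for and could be consulted by publishers seeking to enforce their rights.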

Accountability

6) Providers and deployers of AI systems should cooperate to ensure accountability for system outputs. AI systems pose risks for competition and public trust in the quality and accuracy of informational and scientific content. This can be compounded by AI systems generating content that improperly attributes false information to publishers. Deployers of AI systems providing informational or scientific content should provide all essential and relevant information to ensure accountability and should not be shielded from liability for their outputs, including through limited liability regimes and safe harbours.

Quality and Integrity

7) Ensuring quality and integrity is fundamental to establishing trust in the application of AI tools and services. These values should be at the heart of the AI lifecycle, from the design and building of algorithms, to inputs used to train AI tools and services, to those used in the practical application of AI. A fundamental principle of computing is that a process can only be as good or unbiased as the input used to teach the system (rubbish-in-rubbish-out). AI developers and deployers should recognise that publishers are an invaluable part of their supply chain, generating high-quality content for training, and also for surfacing and synthesising. Use of high-quality content upstream will contribute to high-quality outputs for downstream users.

Fairness

8) AI systems should not create, or risk creating, unfair market or competition outcomes. AI systems should be designed, trained, deployed, and used in a way that is compliant with the law, including competition laws and principles. Developers and deployers should also be required to ensure that AI models are not used for anti-competitive purposes. The deployment of AI systems by very large online platforms must not be used to entrench their market power, facilitate abuses of dominance, or exclude rivals from the marketplace. Platforms must adhere to the concept of non-discrimination when it comes to publishers exercising their right to choose how their content is used.

Safety

9) AI systems should be trustworthy. AI systems and models should be designed to promote trusted and reliable sources of information produced according to the same professional standards that apply to publishers and media companies. AI developers and deployers must use best efforts to ensure that AI generated content is accurate, correct and complete. Importantly, AI systems must ensure that original works are not misrepresented. This is necessary to preserve the value and integrity of original works, and to maintain public trust.

10) AI systems should be safe and address privacy risks. AI systems and models in particular should be designed to respect the privacy of users who interact with them. Collection and use of personal data in AI system design, training, and use should be lawful with full disclosure to users in an easily understandable manner. Systems should not reinforce biases or facilitate discrimination.

By Design

11) These principles should be incorporated by design into all AI systems, including general purpose AI systems, foundation models, and GAI systems. They should be significant elements of the design, and not considered as an afterthought or a minor concern to be addressed when convenient or when a third party brings a claim.

Sustainable Development

12) The multi-disciplinary nature of AI systems ideally positions them to address areas of global concern. AI systems bear the promise to benefit all humans, including future generations, but only to the extent they are aligned to human values and operate in accordance with global laws. Long-term funding and other incentives for suppliers of high-quality input data can help to align systems with societal aims and extract the most important, up-to-date, and actionable knowledge.

Endorsing Organizations*


*Additional organizations to endorse the Principles following publication include: AMI – Asociación de Medios de Información (Spanish News Media Association); APImprensa, the Portuguese Press Editors and Publishers Association; Association of Online Publishers (AOP) (UK); ARI, Asociación de Revistas (Spanish Magazine Media Association); TU – Swedish Media Publishers Association

Full list of organizations signing onto the Global AI Principles:

  • AMI – Colombian News Media Association
  • AMI – Asociación de Medios de Información (Spanish News Media Association)
  • APImprensa, the Portuguese Press Editors and Publishers Association
  • Asociación de Entidades Periodísticas Argentinas (Adepa)
  • Association of Learned & Professional Society Publishers
  • Association of Online Publishers (AOP) (UK)
  • Associação Nacional de Jornais (Brazilian Newspaper Association) (ANJ)
  • Czech Publishers’ Association
  • Danish Media Association
  • Digital Content Next
  • European Magazine Media Association
  • European Newspaper Publishers’ Association
  • European Publishers Council
  • FIPP
  • Grupo de Diarios América
  • Inter American Press Association
  • Korean Association of Newspapers
  • Magyar Lapkiadók Egyesülete (Hungarian Publishers’ Association)
  • NDP Nieuwsmedia
  • News/Media Alliance
  • News Media Association
  • News Media Canada
  • News Media Europe
  • News Media Finland
  • News Publishers’ Association
  • Nihon Shinbun Kyokai (The Japan Newspaper Publishers & Editors Association)
  • Professional Publishers Association
  • ARI, Asociación de Revistas (Spanish Magazine Media Association)
  • STM
  • TU – Swedish Media Publishers Association
  • World Association of News Publishers (WAN-IFRA)

Related resources:

Joint G7 letter on development of global AI principles (News/Media Alliance, European Publishers Council, and Digital Content Next)

News/Media Alliance AI Principles
April 20, 2023 | https://www.newsmediaalliance.org/ai-principles/


The News/Media Alliance (NMA) represents the most trusted publishers in print and digital media based in the United States, from small, local outlets to national and international publications read around the world. Every day, these publishers invest in producing high-quality creative content that is engaging, informative, trustworthy, accurate and reliable. In doing so, they not only make significant economic contributions, but they also play a crucial role in educating, upskilling and informing our communities, building our democracy and economy, and furthering America’s economic, security and political interests abroad.

Introduction

As generative artificial intelligence (GAI) technologies become more prevalent, our membership believes these new tools must only be developed respecting journalistic and creative content, in accordance with principles that protect publishers’ intellectual property (IP), brands, reader relationships, and investments. The unlicensed use of content created by our companies and journalists by GAI systems is an intellectual property infringement: GAI systems are using proprietary content without permission. It’s also critical to acknowledge the societal risks associated with the proliferation of mis- and dis-information through GAI, which high-quality, original content, produced by skilled humans and trusted brands, can help to combat.

GAI developers and deployers must negotiate with publishers for the right to use their content in any of the following manners:

  • Training: Including publishers’ content in datasets and using it for GAI system training and testing.
  • Surfacing: The serving of publishers’ content in response to user inputs, possibly including a cover note generated by the GAI system of what is contained in the surfaced content.
  • Synthesizing: Summaries, explanations, analyses etc. of source content in response to a query.

This document highlights the overarching principles that must guide the development and use of GAI systems as well as the policies and regulations governing them. These principles are founded on our understanding of these systems and technologies as they are currently used – and may therefore be amended as these technologies and uses develop – and apply equally to all publisher content, whether in text, image, audiovisual or any other format.

AI Principles

Intellectual Property

Developers and deployers of GAI must respect creators’ rights to their content. These rights include copyright and all other legal protections afforded to content creators and owners, as well as contractual restrictions or limitations imposed by publishers for the access and use of their content (including through their on-line terms of service). Developers and deployers of GAI systems—as well as legislators, regulators and other parties involved in drafting laws and policies regarding GAI—must maintain an unwavering respect for these rights and recognize the value of creators’ proprietary content. GAI developers and deployers should not use publisher IP without permission, and publishers should have the right to negotiate for fair compensation for use of their IP by these developers. Professional journalism is particularly valuable due to its reliability, accuracy, coherency and timeliness, enhancing GAI system outputs and improving perceptions of system quality. Absent permission and specific licenses, GAI systems are not simply using publishers’ content, they are stealing it.

Use of publishers’ IP requires explicit permission. Use of publisher content by GAI systems for training, surfacing and synthesizing is not authorized by most publishers’ terms and conditions, and authorization for search should not be construed as an authorization for uses such as training GAI systems or displaying more content than contemplated for or as used in traditional search.  GAI system developers and deployers should not be crawling, ingesting or using publishers’ proprietary content without express authorization; requiring publishers to opt out is not acceptable. Negotiating written, formal agreements is therefore necessary.  Industry standards should be developed to allow for automatic detection of permissions that distinguish among potential uses of crawled or scraped content.  These standards and usage agreements can also address other issues such as attribution, monetization, responsibility, and derivative uses.
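No such standard is defined here, but as a hedged sketch of what "automatic detection of permissions" distinguishing among uses might look like, the fragment below parses a hypothetical per-purpose permissions declaration before a crawler ingests a publisher's pages. The file name, directive syntax, and purpose labels are invented for illustration and are not an existing or proposed industry standard.

    # Hypothetical example only: the file name, directives, and purpose labels
    # below are invented and do not reflect any existing industry standard.
    PERMISSIONS_FILE = """\
    # /ai-permissions.txt (hypothetical), published by the rightsholder
    allow: search-indexing
    deny: gai-training
    deny: gai-synthesis
    contact: licensing@example-publisher.com
    """

    def parse_permissions(text: str) -> dict[str, str]:
        """Map each declared purpose to 'allow' or 'deny'."""
        rules: dict[str, str] = {}
        for line in text.splitlines():
            line = line.split("#", 1)[0].strip()   # drop comments and surrounding whitespace
            if ":" not in line:
                continue
            verb, value = (part.strip() for part in line.split(":", 1))
            if verb in ("allow", "deny"):
                rules[value] = verb
        return rules

    def may_use(purpose: str, rules: dict[str, str]) -> bool:
        """Treat any purpose not expressly allowed as requiring permission."""
        return rules.get(purpose) == "allow"

    rules = parse_permissions(PERMISSIONS_FILE)
    print(may_use("search-indexing", rules))  # True
    print(may_use("gai-training", rules))     # False

Under a scheme of this kind, permission granted for search indexing would carry no implication of permission for GAI training or synthesis, which is precisely the distinction the paragraph above calls for.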

Compensation agreements must account for harms GAI systems may cause publishers and the public. GAI system surfacing and synthesizing provide much more proprietary content and information from the original sources than traditional search, often with little or no attribution, and will exacerbate the growing trend toward zero-click results, reducing or even eliminating value for publishers. GAI systems use publishers’ proprietary content to generate outputs that may replace their role in the consumer/information provider relationship. In addition to reducing traffic, this harms publisher brands that have taken years, decades, or even centuries to build.

Copyright laws must protect, not harm, content creators. The fair use doctrine does not justify the unauthorized use of publisher content, archives and databases for and by GAI systems.  Any previous or existing use of such content without express permission is a violation of copyright law. The Section 1201 triennial rulemaking process should not be used to allow for the bypassing of content protections for GAI development purposes. Exceptions to copyright protections for text and data mining (TDM) should be narrowly tailored to limited nonprofit and research purposes that do not damage publishers or become pathways for unauthorized uses that would otherwise require permission.  The U.S. also has made international law commitments in this area that protect its IP-based businesses across multiple sectors and these must be upheld in its approach to AI.

There is an existing market for licensing publishers’ news content. Valuing publishers’ legitimate IP interests need not impede GAI innovation because compensation frameworks (for example, licensing) already exist to permit use in return for payment. GAI innovation should not come at the expense of publishers, but rather at the expense of developers and deployers.  Publishers encourage the use of efficient ways to license through standard-setting organizations that can facilitate efficient training of GAI systems.

Transparency

GAI systems should be transparent to publishers. Publishers have a right to know who has copied their content and what it is being used for. We call for strong regulations and policies imposing transparency requirements to the extent necessary for publishers to enforce their rights. Publishers have a legitimate interest in determining what content of theirs has been and is used in GAI systems. Using datasets or applications developed by non-profit, research, or educational third parties to power commercial GAI systems must be clearly disclosed and not used to evade transparency obligations or copyright liability.

GAI systems should be transparent to users. Direct relationships between users and publishers are critical for the sustainability of the news media and informational content sector. Surfaced and synthesized outputs should connect, not disintermediate, users with publishers. Members of the public should know the source of information that may affect them.  Generative outputs should include clear and prominent attributions in a way that identifies to users the original sources of the output and encourages users to easily and directly navigate to those products, as well as to let them know when content is generated by GAI. Transparency into GAI systems can also help prevent misuse and the spread of mis- and dis-information. Similarly, it enables the evaluation of GAI systems for unintended bias to avoid discriminatory outcomes.

Accountability

Deployers of GAI systems should be held accountable for system outputs. GAI systems pose risks for competition, the integrity of news and creative content, and public trust in journalistic and creative content. This is aggravated by the ability of AI applications to devalue publisher brands by generating content that attributes false or inaccurate information to publishers who have not published the information and who have processes in place to prevent such publication in the first place. Accordingly, deployers of GAI systems should not be shielded from liability for their outputs—to do so would be to provide deployers of GAI systems with an unfair advantage against which traditional publishers cannot compete and would increase the danger to the public and institutions from the unchecked power of this technology.

Fairness

GAI systems should not create, or risk creating, unfair market or competition outcomes. Regulators should be attuned to ensuring GAI systems are designed, deployed, and used in a way that is compliant with competition laws and principles. Developers and deployers should also use their best efforts to ensure that GAI models are not used for anti-competitive purposes. The use of publisher content for GAI purposes without express permission from content owners by firms that have market power in online content distribution should be considered evidence of a violation of competition laws.  Regulators should be vigilant for other anti-competitive uses of GAI systems.

Safety

GAI systems should be safe and avoid privacy risks. GAI systems, including GAI models, should be designed to respect the privacy of users who interact with them. Early indications are that GAI tools will exacerbate trends towards digital platforms collecting large volumes of user data. The collection and use of personal data in GAI system design, training and use should be minimal and should be disclosed to users in an easily understandable manner so that users can make informed judgments about how their data is used in exchange for the GAI service. Users should be informed about, and should have the right to prevent, the use of their interactions with GAI systems for the purposes of training or collection of personal data. Systems should also be designed so that paywalled and otherwise protected content cannot be exposed (including, but not limited to, through membership inference methods).
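As a rough illustration of the exposure concern, the sketch below probes a system with the opening of a paywalled article and measures how much of the protected remainder it reproduces. This is a toy verbatim-regurgitation check, much simpler than formal membership inference methods, and the generate function is a hypothetical placeholder rather than any real API.

    from difflib import SequenceMatcher

    def generate(prompt: str) -> str:
        """Hypothetical placeholder for the completion endpoint of the GAI
        system under test; wire this up to the actual system."""
        raise NotImplementedError

    def exposure_score(paywalled_article: str, prefix_chars: int = 200) -> float:
        """Prompt the system with the opening of a paywalled article and
        measure how closely its continuation matches the protected remainder.
        A score near 1.0 suggests the protected text can be extracted verbatim."""
        prefix = paywalled_article[:prefix_chars]
        remainder = paywalled_article[prefix_chars:]
        continuation = generate(prefix)
        return SequenceMatcher(None, continuation, remainder[: len(continuation)]).ratio()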

Design

All of the principles discussed above should be incorporated in the very design of GAI systems, as significant elements of the design, and not considered as an afterthought or a minor concern to be addressed when convenient or when a third party brings a claim.

Q&A: Navigating Copyright Compliance Issues for News Publishers
February 15, 2023 | https://www.newsmediaalliance.org/qa-navigating-copyright-compliance-issues-for-news-publishers/


The below Q&A is from an interview with News/Media Alliance Executive Vice President & General Counsel, Danielle Coffey, published in the February 2023 edition of Editor & Publisher Magazine.

Q: In what ways have copyright laws and compliance become more complicated in the digital, social media age?

A: There are two sides to this question. First, digital transformation has led to a proliferation in the availability of news sources and content for journalists and publishers, as well as the number of middlemen that publishers have to deal with regularly. As a result, publishers must pay more attention to due diligence — ensuring that they understand the relationships between the original copyright owner and any platforms or middlemen they may use and that they have the necessary rights to any content they publish.

For example, many publishers have recently struggled with the legal uncertainty around using embedded content on Instagram without explicit authorization from the original poster. Related to this, the increased availability of photos and videos taken by amateurs during news events — especially fast-moving ones where time is of the essence — raises important questions on how to acquire the necessary licenses while remaining on top of the newsworthy situation. These conditions require publishers to pay particular attention to ensuring they comply with applicable copyright laws.

Second, the digital age has also made it more complicated for publishers to protect their content against unauthorized uses. These uses range from the overly-expansive use of news content by search and social media platforms, which the Alliance has advocated against at length, to the use of news content for AI training purposes, to the unlawful posting of full-text articles on services often based abroad, often within minutes of publication, threatening the original publishers’ ability to benefit from subscription and digital advertising revenues. These uses are often systematic, and the infringers are hard to detect and locate, making enforcing copyright laws difficult, time-consuming and expensive.

Q: How have U.S. copyright laws and protections been challenged in the courts in recent years? Are there particular cases that news publishers should be familiar with — or concerned about?

A: There have been a few cases in the last five years with implications for news publishers, with some of the most important being Fox News Network v. TVEyes (2018), Goldman v. Breitbart (2018), and Warhol v. Goldsmith (ongoing).

TVEyes concerned a service that copied broadcasts from over 1,400 TV and radio stations and allowed its subscribers to search, download, watch and share clips of these programs. The District Court had found that both the search function and the watch function were fair use. Fox appealed the decision as it related to the watch function, and the Circuit Court reversed, finding the fourth fair use factor — related to potential market harm — decisive. This was a key victory for rightsholders, with the Court correctly noting that the market effect on the copyright owner should be a major factor in fair use analysis and giving leverage to the argument that even the use of clips of protected content can hurt the copyright owner and should be subject to serious scrutiny.

Meanwhile, Goldman focused on publishers’ ability to embed third-party content from social media. Specifically, the defendants had embedded a tweet with the plaintiff’s photograph of Tom Brady without the original poster’s authorization. Rejecting the Ninth Circuit’s “server test,” the Southern District of New York agreed with Goldman, finding that the embedding violated his exclusive rights despite the image being hosted on a third-party server. Similar questions have since arisen in other cases, often concerning Instagram — which recently introduced an option to opt out of embedding following discussions with the News/Media Alliance — with one publisher settling a case brought by a photographer in New York and Instagram managing to defeat a class-action lawsuit against itself related to its embedding function in California. This remains an important debate for publishers to follow.

Lastly, we’re also eagerly awaiting the Supreme Court’s decision on Warhol, which concerns Andy Warhol’s paintings of Prince, based on a portrait taken by photographer Lynn Goldsmith early in Prince’s career and later licensed to Vanity Fair. Following Prince’s death, Goldsmith discovered that Warhol had made a whole series of paintings based on the photo without her permission. The case raises important questions about what amounts to “transformative use” within fair use analysis. The Alliance submitted an amicus brief in support of neither party, outlining some of the delicate considerations the case raises, including how an overly broad definition of “transformative use” could undermine the exclusive rights in the copied work. The Court heard oral arguments in the case this past October, with the decision due this spring.

Q: Copyright was at the heart of the news publisher v. Big Tech negotiations in Europe. Can you share a synopsis of those negotiations and where things stand in Europe? Also, help us wrap some context around what’s happened in Europe and what it may mean for news publishers here in the States.

A: The European Union’s adoption in 2019 of its Directive on Copyright in the Digital Single Market, including Article 15, which requires member states to create a so-called “Publishers’ Right,” was a landmark development. It acknowledged the inability of publishers to effectively protect their content online against unauthorized uses by online platforms and provided publishers with an independent right to do so. In France, the first country to implement Article 15 in national law, publishers soon encountered problems negotiating with Google. In 2019, soon after the law’s adoption, Google refused to pay publishers and indicated it would stop showing excerpts in search results unless a publisher waived its right to compensation. Following a challenge by French publishers, the French competition authority issued an interim ruling, finding that Google likely engaged in anticompetitive behavior and required Google to engage in negotiations. While Google engaged in negotiations and reached some deals after the decision, the French competition watchdog issued a €500 million fine against Google a year later for failing to comply with the orders on conducting such negotiations. Following this fine, Google proposed commitments in early 2022 to change its practices and to resolve the investigation into its anticompetitive practices. The competition authority accepted Google’s commitments in June, with Google expected to negotiate with a broader selection of publishers in good faith.

From the publishers’ viewpoint in the United States, Europe established a precedent that Australia improved upon. The Alliance has embraced a model similar to Australia based on competition law, where the anticompetitive conduct and power of the monopolies are more squarely addressed. The Journalism Competition and Preservation Act, considered by Congress during the last session, would have adopted a similar approach in the U.S. to the Australian model, while Canada, the UK, and India are also considering similar approaches. All of the approaches attempt to address the disparities in the digital ecosystem that allow dominant online platforms not only to set the rules of the game but to reap the vast majority of rewards. Publishers need more leverage to negotiate fairer terms and compensation that help preserve high-quality journalism for future generations.

Q: Is the News/Media Alliance engaged in lobbying Congress for any changes to copyright law or protections granted to news publishers?

A: In comments submitted to the Copyright Office, the Alliance recommended that Congress explore a sui generis, or quasi-property right, that would recognize an exchange of value outside of the fair use factors but within copyright law. We are also actively advocating for changes that would allow publishers to register dynamic web content, which is currently impossible. This would significantly improve publishers’ ability to register and protect their content online effectively.

Q: Who, typically or ideally, should be concerned with or tasked with copyright compliance at the news publisher?

A: This depends a lot on the type and size of the publication, with no easy one-size-fits-all answer. Some large publishers may have whole teams responsible for ensuring compliance with various laws, including copyright, while smaller outlets may rely on an individual person, such as an image editor. The most important thing is that whoever is responsible for compliance takes their job seriously, has the time and resources to do so properly, and has the authority to affect publishing decisions.

Apple’s Latest Privacy Announcement Will Impact a Key Tool in News Publishers’ Audience Engagement Toolbox: Email Newsletters
June 14, 2021 | https://www.newsmediaalliance.org/apples-latest-privacy-announcement-will-impact-a-key-tool-in-news-publishers-audience-engagement-toolbox-email-newsletters/


Steve Jobs was famous for announcing “one more thing” toward the end of his Apple product announcements. Typically, that “one more thing” was a new device that would reshape consumers’ expectations of what technology could do for them. (Sometimes that one thing was a price point – how much would the future cost?) In more recent years, the tone of Apple’s presentations has shifted, as the company’s focus has moved away from new devices to services and features designed to keep users happy. But the June 2021 Worldwide Developers Conference (WWDC) keynote speech, which introduced the company’s iOS 15, did feature one more thing that should have many news organizations paying attention.

The iOS announcement highlighted how the upcoming operating system can integrate into users’ lives in a post-pandemic world and continued Apple’s theme of supporting user privacy. As SVP of engineering Craig Federighi said, “We don’t think you should have to make a tradeoff between great features and privacy. We believe you deserve both.” Much of the pre-announcement conversation focused on App Tracking Transparency (ATT), which provides individual users with the ability to opt out of having their app behavior tracked and shared with advertisers. The feature is seen as a particular challenge to Facebook, which earns the bulk of its massive revenues from advertising (due to its ability to provide advertisers with valuable metrics about app behavior). Following its release in February, one study found that “U.S. users choose to opt out of tracking 96 percent of the time” when prompted. Many news organizations, rightly concerned about Facebook’s power in the advertising market, saw ATT as an attack on the platform’s dominance. The hope was that advertisers would shift budgets away from Facebook (and to channels that used their own first-party data, such as news publications and apps) once it lost some power to target and track users.

However, the iOS 15 announcement featured “one more thing” that has some in the news industry concerned: an update called Mail Privacy Protection. As CNN described, “The email app on Apple devices will now hide users’ IP addresses and their location, so companies sending emails can’t link that information to users’ other online activity. Additionally, senders can’t see if or when users open their email.” Specifically, Mail Privacy Protection will not allow email senders to track the pixel that is used to determine open rates. Email senders – including news organizations – will lose the powerful engagement metrics on how many of their promotions, offers, and importantly, newsletters, are being opened.
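For context on the signal being removed, open rates are conventionally measured with an invisible, uniquely addressed image embedded in each message: when the recipient's mail client fetches that image, the sender's server records an "open." The sketch below is a minimal, hypothetical illustration of that technique (the endpoint, parameter name, and in-memory store are invented, and this is not any vendor's actual implementation); it is exactly this fetch-based signal that Mail Privacy Protection takes away from senders.

    # Minimal, hypothetical sketch of pixel-based open tracking.
    # Not any vendor's actual implementation. Requires Flask: pip install flask
    from flask import Flask, Response, request

    app = Flask(__name__)
    opens: set[str] = set()  # in-memory record of recipients who "opened" the email

    # A 1x1 transparent GIF: the invisible image embedded in the email body.
    PIXEL = (
        b"GIF89a\x01\x00\x01\x00\x80\x00\x00\xff\xff\xff\x00\x00\x00"
        b"!\xf9\x04\x01\x00\x00\x00\x00"
        b",\x00\x00\x00\x00\x01\x00\x01\x00\x00"
        b"\x02\x02D\x01\x00;"
    )

    @app.route("/open.gif")
    def track_open() -> Response:
        # The newsletter HTML would contain something like:
        #   <img src="https://tracker.example.com/open.gif?rid=abc123" width="1" height="1">
        # When the mail client fetches this URL, the sender logs an "open"
        # for that recipient, along with the requester's IP address.
        recipient_id = request.args.get("rid", "unknown")
        opens.add(recipient_id)
        return Response(PIXEL, mimetype="image/gif")

    if __name__ == "__main__":
        app.run(port=8080)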

In his analysis of the announcement, Nieman Lab’s Joshua Benton points out this change is substantial – “This is Apple Mail, the dominant platform for email in the U.S. and elsewhere. According to the most recent market-share numbers from Litmus, for May 2021, 93.5% of all email opens on mobile come in Apple Mail on iPhones or iPads. On desktop, Apple Mail on Mac is responsible for 58.4% of all email opens.” Benton’s piece reviews the ongoing conversation about the changes and points out, “Open rates will now officially be useless,” and that small publishers, especially individual newsletters, have the most to lose. In a tweet, Matt Taylor of the Financial Times points out that the change will “hurt small pubs the most,” and that “for those with no audience it might stop them from ever succeeding.”

Platformer writer Casey Newton – who rounded up multiple tech and news industry responses to the announcement – agreed with Benton’s conclusion that without the ability to track email opens, publishers will “adjust, somehow.” In Newton’s newsletter, he shared that he wasn’t “sure that people doing email-based journalism have all that much to worry about from the shift.” He cites independent newsletter publisher Alex Kantrowitz, whose ad-supported newsletter “was sold out for the first half of the year, thanks to a premium audience he identified not by pixel-based tracking but by a good old-fashioned reader survey.” As Renee Cassard, chief research officer at the media conglomerate Omnicom pointed out to Digiday, “The marketplace has sort of realized that there are limits to behavioral data.” Beyond straightforward behavior tracking, publishers can leverage research and data-generation tools to understand not only what their readers have done, but who they are and what they want. This data would be of high value for internal product development as well as advertiser needs.

Kantrowitz’s perspective may be the most helpful for news publishers that send newsletters and are concerned about the changes. But as with any alternative, it is not practical to view it as a magic bullet solution to preserving long-term relationships – in fact, a simple open rate calculation was never an indication of that, either. It has just been the key metric by which advertisers value newsletter placements (until now). The point is that there are many ways to build relationships with readers, and as the industry shifts toward a more consumer-needs driven model, newsletters should be seen as tools for promoting engagement and building habitual, loyal, paying readers; not viewed solely for their potential to attract advertisers. Eventually, there will be a new “open rate.” But as these indicators evolve, continuing to meet readers where they are and provide high-value products will best position your organization for success.
