The TIP blog: Creating an open space for Tech and Tech Policy discussion

It is with great pleasure that we launch the Technology, Internet and Policy (TIP) blog. Founded in 2022, TIP has sought to create a forum where academics, policymakers and civil society actors come together to exchange ideas around topics as diverse as social media platforms, artificial intelligence, digital advertising, online chat groups, memes, influencers, online harm, content moderation, platform governance, foreign interference, new modes of digital organisation, incivility and much more! Our common interest has been in thinking through the implications of these digital phenomena for policymaking and regulation, and we have organised events that bring together a diverse community of stakeholders to begin discussing these fascinating topics.

With this blog, we hope to open up the conversation even further, providing an open access platform on which our members – from within academia and beyond – can pose questions about the impacts of digital media on society. We hope the blog provides a forum where academics can present empirical research findings and theoretical reflections, and practitioners can illuminate regulatory or policy challenges. Our aim is to allow our members to share their interests and research, to exchange ideas, gain new skills and think about engagement beyond academia. 

The blog aims to provide a space for sharing and stimulating knowledge. With each post no more than 1,400 words, we want to give members a platform to reflect on the most pressing digital policy issues. If you have an idea for a post, do not hesitate to get in touch – whether you’re a member of TIP or not, we want to hear your ideas about digital phenomena, social and methodological challenges, and policy dilemmas.

To kick things off, the organising committee has collaborated to identify a series of provocations about the way we’re thinking about digital technology and policymaking.

Digital campaign regulation: inputs not outputs should be paramount – Kate Dommett

With 2024 widely branded the ‘year of elections’, this year we will confront new questions about how digital campaigning should be regulated. Over recent years we’ve seen significant attention focused on misinformation and the need for content regulation. There have been calls for claims in online political advertising to be fact-checked and for real-time content moderation of digital campaign messages. Whilst the outputs of a campaign are an important focus of regulatory attention, less thought has been given to the architecture behind digital campaigns and the degree to which campaigns operate on a level playing field.

Recent research on digital and data-driven campaigning has shown persistent and growing inequalities between campaigners. Although digital campaigning is often less expensive to execute than its offline counterpart, the ability to fully leverage digital tools is skewed heavily towards larger, better financed organisations. Not only are these campaigns able to buy more adverts or secure the most competitive and highly priced digital channels, they are also able to invest in staff who can experiment, refine and optimise digital technologies. Over time the gap between the richest and poorest campaigns is growing, as progressive investment advantages those able to continually resource the digital campaign.

Whilst many regimes use financial regulations to promote free and fair elections and open competition, digital technology is disrupting prevailing systems of accountability. It is therefore essential for policymakers to reconsider the most effective way to regulate the use of digital technology in elections. In doing so, they should avoid the tendency to focus on outputs and content regulation, and instead consider how to create a more equitable and fair digital campaign in which all campaigners are able to access and deploy digital technologies.

Artificial intelligence regulation: let’s avoid AI exceptionalism; the role of existing regulation remains central – Declan McDowell-Naylor

Artificial intelligence (AI) is not new. The technologies we see today, such as neural networks, have their roots in the 20th century. Beneath all of the exciting developments and urgent governance initiatives, we must recognise that what has fundamentally changed is the current availability of huge amounts of both computing power and data.

It is personal data that defines the remit of the Information Commissioner’s Office (ICO) when it comes to AI regulation. In simple but clear terms, if the personal data of UK data subjects is being processed in any part of an AI system, then it is regulated by the ICO and the existing laws that we enforce. Without exception, individuals’ information rights in relation to AI are already protected.

Privacy scholar Daniel Solove recently defined AI exceptionalism as, “treating AI as so different and special that we fail to see how the privacy problems with AI are the same as existing privacy problems, just enhanced”. And while AI is not new, and it is not exceptional, it is incredibly important. As our Commissioner, John Edwards, said at this year’s IAPP Data Protection Intensive, AI is “the biggest question on my desk”. One area in particular is generative AI, where the ICO is working at pace to clarify how the law applies to these emergent AI models.

As a final word, we highly encourage you to submit to the ICO’s consultation series on generative AI, which you can find here: https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-consultation-series-on-generative-ai-and-data-protection/

Navigating the Challenges Ahead: Implementing the EU Artificial Intelligence Act – Giulia Sandri

The European Parliament’s impending plenary vote on the adoption of the AI Act marks a significant milestone, 1004 days after the European Commission first presented the legislative proposal in April 2021. This journey has involved 43 technical meetings, 12 shadow meetings in the European Parliament, 35 technical trilogue meetings between the Parliament, the Council of the EU, and the Commission, and 6 political trilogues, revealing a commendable efficiency in law-making by Brussels standards.

However, as the final text of the AI Act nears approval, attention turns to the challenges of effective implementation and enforcement. While the revised text incorporates changes aimed at enhancing implementation and enforcement procedures, such as the flexible principles outlined in Article 8 and the promotion of private-public partnerships through Article 53, several provisions raise concerns regarding implementation challenges.

One primary concern is the lack of legal certainty within the AI Act. Despite the European Commission’s aim to establish an ecosystem of trust, many definitions within Article 3 remain vague, procedures are incomplete, and legal overlaps with existing laws like GDPR and DSA present further complexity. Moreover, the quality of legal drafting has been questioned, potentially leading to increased workload for the European Court of Justice.

Another challenge lies in the governance structure outlined in the AI Act. The AI Office, tasked with extensive advisory responsibilities, has been established, but clarifying its role and building its capacity will require time and resources. Ensuring consistency in enforcement across Member States, as witnessed with previous laws such as the GDPR, poses a significant challenge. Additionally, the requirement for deployers of high-risk AI systems to conduct fundamental rights impact assessments adds complexity to their role, with final inclusion decisions resting with national competent authorities (national notifying authorities and national market surveillance authorities). This decentralised approach to governance could lead to inconsistencies in enforcement activities and interpretations, posing challenges for organisations navigating the regulatory landscape.

Addressing these challenges requires a concerted effort to develop AI expertise across all relevant organisations. Government agencies, law firms, non-profit advocacy organisations, consumer rights organisations, and businesses must possess the necessary capabilities to effectively test, evaluate, and govern AI systems. Without this expertise, the AI Act risks being ineffective or even detrimental. The successful implementation and enforcement of the AI Act hinge on addressing issues of legal certainty, governance complexity, and enforcement consistency while prioritising the development of AI expertise.

Cross-Border Political Influence Requires Cross-Border Transparency Systems – Amber Macintyre

Political parties outsource the design and delivery of their campaigns to a vast array of private firms. Whether it’s providing datasets on voters, advising on the direction of campaign messaging, or supplying channels for misinformation campaigns, these companies profit from our political processes. This influence industry is set to thrive in the 2024 election landscape.

This industry is international, with private companies profiting from elections all over the world. In recent years, the Communist Party of Nepal has engaged PR consultants, Argentina’s Frente de Todos hired social media advertising firms, and various ‘buzzer’ firms profited in Indonesia’s recent controversial elections.

Furthermore, some of these firms work multinationally, giving them political influence over outcomes not just in one country (as a winning governing party might have) but in several countries across the world. For example, Bell Pottinger has worked in the United Kingdom, South Africa, Kuwait and Zambia, and the CT Group has been involved in campaigns in Australia, Italy, Malaysia, the United Arab Emirates, Sri Lanka and Yemen. Both companies have been involved in misinformation scandals. The influence of these private firms is steeped in money: according to the Electoral Commission in the UK, since 2010 Crosby Textor has made over GBP 8 million from working with the Conservative Party and has also made several thousand-pound donations to the party.

Despite their substantial role in our political lives, these firms often operate in the shadows, evading democratic scrutiny. Cross-border collaborations between national election observers are necessary to track trends as they happen – not just every three to five years – and to identify and hold to account firms that dodge regulation while profiting from an international business landscape.

Synthetic Media Shows the Public’s Lack of Resilience to Disinformation: Are We Too Reliant on Content Moderation & Regulation to Fix Democratic Issues? – Liam McLoughlin

The series of elections in 2024 will certainly stress-test both the public’s and social media platforms’ ability to detect and act upon mis- and disinformation. We’ve already seen multiple instances, from across the globe, of media created by generative AI with the intention of misleading voters. Notable incidents range from robocalls using AI-generated audio of Joe Biden telling voters to stay home, to AI images of black voters alongside the presumed Republican candidate aimed at suggesting a high level of popularity within black communities. Likewise, in the UK we have seen deepfaked audio of Labour’s Keir Starmer abusing staffers, which went viral during Labour’s Party Conference last year. The pattern so far seems to be that these synthetic media are only noticed or removed after they have already been viewed by their intended audience. In short, the damage is done before action is taken.

The response, predictably, has been calls for greater regulation of AI and for further content moderation efforts to detect and remove AI-generated media. However, the likelihood is that these efforts will be too slow to make much of an impact in the upcoming elections. The US House’s AI Task Force has only recently been set up, and recommendations can take months, if not years. At the same time, while some platforms such as TikTok have banned AI fakes of public figures, there is little to suggest they have found methods to accurately detect this type of content at scale. Indeed, the World Economic Forum’s Global Risks Report places misinformation and disinformation above terrorism among its short-term global risks, noting that regulatory efforts have failed to keep pace with technological advancements.

So, what’s to be done? One suggestion is to increase the public’s resilience to disinformation. Current research and reports highlight that the public (or certain sections of it) are notoriously poor at detecting fake news or deepfakes. However, there is evidence that media literacy can help raise detection rates. This suggests that responses to the problems of technology and democracy cannot rely on regulation and content moderation alone, but should also seek to build resilience across our citizenry.
