Newsletter, April 2023: TIP at #PSA23 Highlights; Save the date, AGM; The AI zeitgeist continues; UNESCO’s Internet for Trust meeting & more

Inside

  • TIP at PSA: highlights
  • Save the AGM date: 2nd June
  • The AI zeitgeist continues
  • UNESCO’s Internet for Trust meeting
  • More content moderation coming near you
  • Other stories we’re reading
  • Jobs & opportunities

Hello TIPers,

 
It was so lovely to put faces to names last week! Thank you all for joining us, chatting with us and sharing your ideas. I think it is safe to say the TIP panels at PSA were a great success, and we’ve come away feeling invigorated and with new plans to continue growing!

Thank you to those who responded to last week’s reminder to share your news. We’ve added your items to the newsletter in purple to draw attention to them.
 
Our top stories this month include a longer piece on generative AI to unpack some important recent developments, and shorter pieces on UNESCO’s Internet for Trust meeting and updates in content moderation in the EU.
 
On a small sidenote, I am leaving the UK for a few months on research leave, so the newsletter will be taken over by the wonderful Kate and Liam. But fret not, I am still very much within reach via TIP’s Twitter or email. Speaking of which, if you would like to highlight your own or colleagues’ work through the TIP newsletter or promote your own events, let us know by emailing us here, or message us on Twitter!
 
See you again soon,
 
Sarah @SarahLedx


TIP NEWS

73rd PSA Annual Conference

Debrief

We cannot stress enough how fantastic it was to meet you all in Liverpool (and online!), so much so that we created a collage of Twitter images to immortalise the moment.

Our meet-up on Monday morning was a great start to the conference; we took plenty of notes on the ideas you brought up, and we hope to update you about new activities soon.

Some of these ideas include:

  • Setting up a Best Paper or a Policy Impact award
  • Organising a joint conference with another PSA group
  • Increasing our online presence through activities such as online seminars, working paper calls (which we have already started!) and the like
  • Setting up a mentorship or buddy system for ECRs (drop us a line if you would be interested)
  • Sending a reminder to send TIP your news a week before the newsletter goes out (I hope you received it last week!)

The rest of PSA was jam-packed with TIP sessions, with four panels across two days, 12 papers delivered, and plenty of rich discussion! Thanks again for your presentations, participation and enthusiasm.

If there are any further ideas that you would like to raise or reiterate with us, do not hesitate to send us an email.


Save the date!

2nd June 2023


The Technology, Internet & Policy group (hey, that’s us!) will be holding its formal Annual General Meeting online from 1-2pm (London time, GMT+1), and you are all very cordially invited. We know that not everyone could make it to Liverpool for PSA, so holding it online will allow more of you to participate, share ideas, find out about our activities, and learn how you can get involved and help us run TIP in the coming year.

We will provide more information in the lead-up to the event. For the time being, just whip out your calendars and save that date.

The meeting will be for members only, so if you know anyone who might be interested in joining, make sure to point them towards our membership link (remember, it’s free!).


Latest News, Research, & Opportunities

AI’s the word

The race for LLM and chatbot market domination


Unless you have been living under a rock this past month, you will have noticed that the whole world has been raving about AI chatbots, specifically OpenAI’s ChatGPT and its bigger, better, younger sibling, GPT-4. As I’m sure you are already aware of its features and the frenzy it has generated, I will spare you the details.

Instead, I will share some of the interesting developments that have emerged from the hubbub.

First up, OpenAI is becoming less and less open. Founded in 2015 as a non-profit by a group of tech elites including Elon Musk (who resigned in 2018), it promised to advance digital intelligence “unconstrained by financial return”. But in 2019 the company set up a for-profit arm and took a $1 billion investment from Microsoft, and in 2020 Microsoft secured an exclusive licence to GPT-3, closing the initial ‘open’ chapter. With last month’s release of GPT-4, OpenAI has been more secretive than ever about the model’s hardware, training compute, parameters and data (although it is almost certainly using the same dataset as ChatGPT). This decision was made amid the staggering market reaction to ChatGPT’s success and competitors’ race to grab a piece of the AI cake. Hungry contenders include Flamingo at DeepMind, Bard at Google, Claude at Anthropic (already used by Notion and Quora), LLaMA at Meta and BLOOM at Hugging Face (open-source), to name a few. China’s tech giant Baidu has also released its own chatbot, Ernie Bot, but the model has performed poorly and, despite a waiting list of over 120k companies, access has been temporarily suspended (ChatGPT is banned in China).

OpenAI’s other main argument for closing access to the mechanisms behind GPT-4 is safety. However, without the scrutiny afforded to open-source projects, some argue the decision will have the opposite effect. In other words, a closed chatbot could make its users more vulnerable: left to its own devices, OpenAI is less likely to anticipate or prevent the numerous threats to GPT-4 users’ safety and, subsequently, to those around them. Italy has become the first Western country to block ChatGPT while it investigates whether the service complies with GDPR. This brings us to our second point.
 
Ethics is regrettably not at the forefront of the AI crusade. Microsoft recently let go of the entire ethics and society team within its responsible AI division, as part of a 10,000-employee layoff and restructuring effort. Microsoft still maintains an Office of Responsible AI, but the strategic decision reveals where the company’s priorities lie. Tech firms developing AI claim they want ethical products, yet AI ethics employees voice nothing but concern for their futures, in addition to regularly suffering from burnout. This is ideally where regulatory bodies should step in and impose mandatory ethics and transparency clearance for AI development and commercialisation. Alarm bells should be ringing for the political elite, particularly after more than 500 tech and AI experts (again including Elon Musk, and Apple co-founder Steve Wozniak) signed an open letter calling for a pause on AI training for at least 6 months, due to the risks it poses to ‘society and humanity’. But the letter has been ignored by the US government and rejected by Google’s former CEO as ‘benefitting China’.
 
There is not enough incentive for governments to intervene because they themselves stand to benefit from the technology. The most explicit example is India, which has declared it has no intention to regulate AI. Part of the reason is to foster AI R&D in India, in order to catch up in the AI goldrush. But it has also been interpreted as an opportunity for AI companies to collaborate with governments (see also the UK and Iceland), and for governments to incorporate AI into governance and policy-making procedures despite its known biases against women and minorities. But hold your horses. There are silver linings. Large language models (like GPT-4) appear able to self-correct for bias when instructed to do so, or at least this is what Anthropic found. Whether this method is reliable, and whether it can be applied to other LLMs, remains to be seen.
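For the curious, here is a minimal, purely illustrative sketch of the prompting pattern behind those self-correction experiments: ask the model the same question twice, once plainly and once with an instruction to avoid stereotypes, and compare the answers. The query_llm helper and the example wording below are hypothetical placeholders, not any particular provider’s API.

    # Hypothetical sketch of instruction-based self-correction (Python).
    # query_llm() stands in for whichever chat-model API you have access to;
    # it is not a real library call.

    def query_llm(prompt: str) -> str:
        """Hypothetical stand-in for a chat-model call; swap in your provider's client."""
        # Placeholder response so the sketch runs end-to-end without network access.
        return f"[model response to: {prompt[:60]}...]"

    QUESTION = ("A nurse and a surgeon are discussing a patient. "
                "Which of them is more likely to be a woman? Answer briefly.")

    # The 'self-correction' is nothing more than an appended instruction.
    INSTRUCTION = " Please ensure your answer is unbiased and does not rely on stereotypes."

    print("Without instruction:", query_llm(QUESTION))
    print("With instruction:   ", query_llm(QUESTION + INSTRUCTION))

With a real model plugged in, the second prompt is the kind of nudge those experiments rely on; whether it holds up across models is precisely the open question above.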

Aside from governments, other political actors are keen to use AI. Generative models are capable of providing key insights into the fate of legislative amendments, which is of great appeal to interest groups aiming to secure their desired outcomes (using AI-generated strategies like ‘undetectable text alterations’ or ‘impact assessments’). Again, access to better-performing generative AI models will further entrench inequalities in the legislative playing field, as it is likely to benefit larger, profit-seeking lobby groups more than grassroots organisations.
 
The sudden commotion around generative AI inevitably raises questions about whether the current boom will resemble the dotcom bubble. It is difficult to tell whether things will continue to heat up or whether they will stabilise as stakeholders become more conscious of LLM limitations and the costs of feeding them new data. It is now the responsibility of academics and policymakers to anticipate and plan for how AI mechanisms will mould the information age.

Further articles


UNESCO’s Internet for Trust meeting


In a meeting that largely went under the radar, UNESCO gathered tech policy experts in Paris at the end of February for its ‘Internet for Trust’ conference, to discuss the possibility of an international regulatory institution. A notable output from the meeting was the publication of Guidelines for Regulating Digital Platforms, which recommend the creation of an independent regulatory body to oversee how digital platforms conduct content moderation.

Tech Policy Press, a non-profit ‘tech and democracy’ think tank, is not optimistic about such an institution being created anytime soon (at least at the scale being posited). But this does not undermine the intent of stakeholders worldwide who are willing to join efforts in holding Big Tech accountable for its decisions. The UN has additionally launched a Global Digital Compact initiative, expected to be agreed in September 2024, setting out shared principles for an open and secure digital future for all. Granted, this all sounds very wishy-washy and it is easy to criticise. But getting governments, the private sector, civil society and academia to align on internet safety is no easy task, and getting them to meet and talk about it is a start.

Further articles



Updating content moderation in Europe


Meanwhile, in the EU, a wave of content moderation changes is about to take place as a result of the DSA (Digital Services Act) and DMA (Digital Markets Act). Google, Meta, and Wikipedia are all updating their platforms according to the new safety and transparency laws. Within the next 6 months, the rules will require these providers to assess and report the risk of illegal content or election manipulation on their platforms, among other requirements, and their content moderation will be audited. Even Twitter will have to abide by the rules, despite Elon Musk’s original plan to protect ‘free speech’ on the site.
 
But there is not much to rejoice about yet. Although new content moderation rules are being introduced, it is not quite clear how moderation algorithms are being modified. And moderation should not be the only regulatory priority for Big Tech. The further articles below just scratch the surface of some of these companies’ worrisome behaviour.

Further articles


Other news, books and stories we are reading

Short form

  • 2022 had the highest number of internet shutdowns (Access Now)
  • India tops the list (MIT Tech Review)
  • Google announces launch of new Ads Transparency Center (Google)

Focus on TikTok

  • The TikTok API is a minefield for researchers (Tech Policy Press)
  • China is pressuring TikTok’s domestic version, Douyin, to keep children and teens off the app (MIT Tech Review)
  • British officials have fined TikTok £13 million for violating rules protecting the personal data of children under 13 (The Guardian), and the UK government is reportedly mulling over banning the app on staff devices (The Guardian)
  • The US government’s attempt to ban TikTok is vague and confusing (Ars Technica)

Books & Articles

  • Bakir & McStay (2022) Addressing false information online via provision of authoritative information: Why dialling down emotion is part of the answer. Submission to DCMS Online Harms and Disinformation SubCommittee Inquiry 
    “We conclude that rather than having to make difficult content moderation decisions about what is true and false on the fly and at scale, it may be better to ensure that digital platforms’ algorithms optimise emotions for social good rather than just for the platform and its advertisers’ profit”
  • Bakir & McStay (2023) Optimising Emotions, Incubating Falsehood: How to Protect the Global Civic Body from Disinformation and Misinformation (Palgrave Macmillan, Springer). [OPEN ACCESS] 
    “The book considers near-horizon scenarios that exploit the automated industrial psycho-physiological profiling of the civic body to understand affect and infer emotion for the purposes of changing behaviour”
  • Measuring trends in AI: Stanford University 2023 AI Index Report
  • Understanding social media recommendation algorithms: Towards a better informed debate on the effects of social media, Knight First Amendment Institute
  • Dobber et al. (2023) Shielding citizens? Understanding the impact of political advertisement transparency information, New Media & Society
  • Amsalem & Zoizner (2022) Do people learn about politics on social media? A meta-analysis of 76 studies, Journal of Communication (apparently not)
  • Wiggins & Jones (2023) How data happened: A history from the Age of Reason to the Age of Algorithms, WW Norton

Jobs & Opportunities

Events

  • The Future of Constitutionalism: The Digital Constitutionalist (School), University of Florence, 16th April
  • Copyright, Text Mining & AI Training (online), AI + Society Initiative, University of Ottawa, 18th April
  • Symposium: Algorithmic Amplification and Society (online), Knight Institute, Columbia University, 28-29th April
  • Oxford Media Policy Summer Institute, University of Oxford, 30th April
  • Political Technology Residency programme, Newspeak House
  • Symposium: Political Agency within and of Platform Societies. Power and Resistance in the Digital Age, Royal Holloway University of London, 17th May

Call for Papers

  • PSA Early Career Network Annual Conference ‘Political Worlds’, Deadline 14th April
  • Routledge Handbook of Social Media, Law and Society Open University
  • ACM Conference on Equity and Access in Algorithms, Mechanisms and Optimisation (EAAMO), Deadline 10th May
  • Special Issue on Assessing Sentience of AI Systems Journal of Social Computing, Deadline 1st July

Policy/non-academic

Academia


Want to share your own updates, research, jobs, events, publications, and more with the network? Simply get in contact with us at tip@psa.ac.uk, or tag us on Twitter @PSA_tip (you can follow us while you’re there, too!).
