AI researchers urge tech companies to go beyond scale to address systemic social issues


For a long time, the definition of success for startups and Big Tech companies alike has been synonymous with three words: hockey stick growth. To scale is to be capable of rapid growth in users and revenue, but AI researchers say companies interested in purpose beyond profit need to look past growth for its own sake. That’s according to a paper published in recent days by Google senior research scientist Alex Hanna and independent researcher Tina Park.

The paper argues that scale thinking is not just a way to grow a business but a method that impacts every part of one, actively inhibits participation in tech and society, and “forces particular types of participation to operate as extractive or exploitative labor.”

“Whether people are aware of it or not, scale thinking is all-encompassing. It is not just an attribute of one’s product, service, or company, but frames how one thinks about the world (what constitutes it and how it can be observed and measured), its problems (what is a problem worth solving versus not), and the possible technological fixes for those problems,” the paper reads.

The paper goes on to say that companies rooted in scale thinking are unlikely to be as “effective at deep, systemic change as their purveyors imagine. Rather, solutions which resist scale thinking are necessary to undo the social structures which lie at the heart of social inequality.”

An approach that rejects scale as essential runs counter to what is today central dogma for Big Tech companies like Facebook and Google, and how media and analysts often assess the value of emerging startups.

An antitrust report released by Congress earlier this month cites scale as part of the formula behind the anticompetitive practices of Big Tech companies that maintain and strengthen monopolies across the digital economy. A Department of Justice lawsuit filed Tuesday against Google, the first against a major tech company in two decades, also names scale, achieved through algorithms and the collection of personal user data, as a big part of why the government is suing the Alphabet company.

Scale evangelists include Y Combinator cofounder Paul Graham, AWS CTO Werner Vogels, and former Google CEO Eric Schmidt, who is quoted in the DOJ lawsuit as saying “Scale is the key” to Google’s strength in search.

Embedded in scale thinking, Hanna and Park argue, is the idea that scalability is morally good and that solutions that do not scale are morally abject. The authors say that’s part of why Big Tech companies place such a high value on artificial intelligence.

“Large tech firms spend much of their time hiring developers who can envision solutions which can be implemented algorithmically. Code and algorithms which scale poorly are seen as undesirable and inefficient. Many of the most groundbreaking infrastructural developments in big tech have been those which increase scalability, such as Google File System (and subsequently the MapReduce computing schema) and distributed and federated machine learning models,” the paper reads.
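To make the scalability point concrete, here is a minimal sketch (illustrative only, not code from the paper or from Google’s actual systems) of the MapReduce pattern the authors cite: a job is split into independent map tasks whose outputs are merged in a reduce step, so the same program can spread across many machines.

```python
from collections import defaultdict

# Hypothetical word-count example of the MapReduce pattern.
# Function names and data here are invented for illustration.

def map_phase(document):
    # Emit (word, 1) pairs; each document can be mapped
    # independently, so this step parallelizes freely.
    return [(word, 1) for word in document.split()]

def reduce_phase(mapped_pairs):
    # Merge the emitted pairs by key; distinct keys can also
    # be reduced in parallel on separate workers.
    counts = defaultdict(int)
    for word, n in mapped_pairs:
        counts[word] += n
    return dict(counts)

documents = ["scale is the key", "the key is scale"]
mapped = [pair for doc in documents for pair in map_phase(doc)]
print(reduce_phase(mapped))  # {'scale': 2, 'is': 2, 'the': 2, 'key': 2}
```

Because the map calls share no state, adding machines adds capacity, which is precisely the property scale thinking prizes.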

Scale thinking is also shortsighted because it requires treating resources and people as interchangeable units, and it encourages the datafication of users in order to “find ways to rationalize the individual into legible data points.” This approach can reveal when systems are not built to serve everyone, and it can harm people who fall outside the assumed universality of scaled solutions.

Hanna and Park also call scale thinking an ineffective way to increase hiring or retention of employees from diverse backgrounds at Big Tech companies. Since the deaths of Black people like Breonna Taylor and George Floyd led to calls for racial justice earlier this year, a number of major tech companies have recommitted to diversity goals, but for years now progress has been virtually undetectable. Examples offered in the paper include a focus on the number of bias workshops or certain inclusion metrics rather than the experiences of marginalized people within a company.

Instead of scale thinking, the authors suggest alternatives like mutual aid, which asks people to act interdependently and take responsibility for meeting the direct material needs of individuals, rejecting scale and the categorization of people as a North Star. Inspiration for mutual aid as an alternative came in part from the kinds of support systems that sprang up after the start of the COVID-19 pandemic.

“While scale thinking emphasizes abstraction and modularity, mutual aid networks encourage concretization and connection,” reads the paper. “While mutual aid is not the only framework through which we can consider a move away from scale thinking-based collaborative work arrangements, we find it to be a fruitful one to theorize and pursue.”

In addition to examining mutual aid, the paper encourages developers to ask certain questions about the systems they create: whether a system legitimizes or expands social structures people are trying to dismantle, whether it encourages participation, and whether it centralizes power or distributes it among developers and users.

Recommendations in the paper are in line with a range of ethically centered alternative ways to build technology and AI proposed by the fairness and accountability portion of the AI community in recent months. Others include the idea of anticolonial AI, which rejects algorithmic oppression and data colonization; queering machine learning; data feminism; and building AI based on the African philosophy of Ubuntu, which focuses on the interconnectedness of people and the natural world.

There’s also “Good Intentions, Bad Inventions,” a Data & Society primer published earlier this month that attempts to dispel common myths about healthy ways to build technology and what developers can do to improve user well-being.

Titled “Against Scale: Provocations and Resistances to Scale Thinking,” the paper was highlighted this week at a workshop of the Computer-Supported Cooperative Work and Social Computing (CSCW) conference. Before writing critically about scale, Hanna and colleagues at Google published a paper in late 2019 arguing that the algorithmic fairness community should look to critical race theory as a way to interrogate AI systems and their impact on human lives.

