Think
Artificial Intelligence
Is AI a technology of extraction?
“History doesn't repeat itself but sometimes it rhymes” . . . well, ain’t that the truth. As much as I like technology personally, Kate Crawford’s argument that “AI is a technology of extraction” really resonates with me. I looked up her book Atlas of AI and am keen to read more of her work: “AI systems are not objective or neutral tools that make determinations without human input. They are created by a narrow demographic of engineers and often trained on data that comes from skewed sources, which is then labeled and classified in ways that are fundamentally political. These systems are then applied in complex social contexts with histories of structural inequality, such as policing, the court system, healthcare, and education. The inequities are not just in how AI systems “see” the world but in the ways they are deployed in service to the most powerful interests. AI systems are best seen as expressions of power, created to increase profits and centralize control for those who wield them.”
and the disconnect between people and the means of production . . . also makes me think of Google’s ScanOps teams . . . After filmmaker Andrew Wilson spent six months at Google, he noticed something interesting about the way the company treats its employees. In a 2012 article (worth a read), Wilson explains Google’s colored badge system:
Thousands of people with red badges (such as me, my team, and most other contractors) worked amongst thousands of people with white badges (as Full-time Googlers). Interns are given green badges. However, a fourth class exists at Google that involves strictly data-entry labor, or more appropriately, the labor of digitizing. These workers are identifiable by their yellow badges, and they go by the team name ScanOps. They scan books, page by page, for Google Book Search. The workers wearing yellow badges are not allowed any of the privileges that I was allowed – ride the Google bikes, take the Google luxury limo shuttles home, eat free gourmet Google meals, attend Authors@Google talks and receive free, signed copies of the author’s books, or set foot anywhere else on campus except for the building they work in. They also are not given backpacks, mobile devices, thumb drives, or any chance for social interaction with any other Google employees. Most Google employees don’t know about the yellow badge class.
The exponential growth of AI, the network effect . . . the proliferation of data extraction from LinkedIn to Facebook to my local grocery chain . . . and the lack of day-to-day conversation about what we perceive as ‘normal’ or ’acceptable’ is pretty concerning. On the war front, I heard a lecture by Michael Richardson (a researcher and academic on surveillance, the militarisation of culture, etc.) a few months ago, and he talked about his major concerns:
The sheer volume of decisions being made by military AI like Lavender, Gospel and Where’s Daddy means more targets being identified on a daily basis and more bombs being dropped - an exponential increase in the scale of warfare - which is what we’re seeing from Israel right now as it uses these technologies.
LLMs where the “target” and “value” of human collateral are being calculated by AI. Despite claims that there will always be humans in the loop, his research shows that in many cases this is actually not so - much of the decision-making is automated. He makes two great points about how we, as humans, think we’d respond to or utilise automated algorithmic decisions. We’d like to think we’d always insist on “humans in the loop”, but his research shows that these automated recommendations play to our human tendencies to:
Prefer ACTION over INACTION
TRUST the TECH
Pretty concerning, don’t you think?