Reactions to Frontline’s documentary “In the Age of AI”

Image source: https://i1.wp.com/www.pbs.org/wgbh/frontline/wp-content/uploads/2019/10/FL_InTheAgeofAI_SignatureImage.jpg?resize=1200%2C630

This discussion is based on the PBS Frontline documentary, “In the Age of AI.” To watch this documentary, visit this link:

PBS Frontline: In the Age of AI

Questions for Discussion

  1. When Kai-Fu Lee says, “Data is the new oil,” and “China is the Saudi Arabia of data,” what does he mean by those statements? Can you describe at least three ways that companies or governments with data monopolies could benefit from having those resources over other companies or governments?
  2. From your perspective as a user or non-user of social media or search indexing technology, do you think social media, search, or website (cookie) tracking of US citizens is important to the United States in the race to become a leader in artificial intelligence, or is that tracking/collection an invasion of privacy and individual rights in our country? What about in other countries?
  3. Can you think of a few careers you might consider pursuing when you graduate from college or graduate school? Once you have a few ideas, visit https://willrobotstakemyjob.com/ and plug in those career ideas to see their overall risk of being replaced by automation. How vulnerable are those potential careers to automation? Can you think of 2-3 careers that existed in 1950 that do not exist today? How about 1990? How about 2020? List the jobs from those years/decades that have become redundant and been replaced by automation.
  4. We live in a remarkable time in which nation-states are vying to become the leaders of AI technologies. What challenges do you foresee your generation having to face as nation-states look to become AI leaders and potentially use that technology against other nations? Who do you see as the key players in that race? In your opinion, how will that shape diplomatic and economic relationships between the United States, the European Union, the Russian Federation, and China?
  5. How will services like ChatGPT, QuillBot, etc. change the landscape of learning a fundamental skill like essay writing? What about writing code, or undercutting the need to learn computer science? What other essential tangents of learning could be replaced by AI? Of those tangents, is it a good thing that humans would or would not need to learn that skill? Why? Explain your answers.

AS A REMINDER, please cite the URL of whatever sources you use to answer these questions.


4 Responses to Reactions to Frontline’s documentary “In the Age of AI”

  1. Gil Mebane says:

    Data has become the most important resource known to humankind. What’s more, China has the most data (the most people) and the most capable ways of mining that data, much as Saudi Arabia has historically had the most oil and extracted the most oil.
    Three ways companies could benefit:
    Knowing the ‘user/individual’ better allows for more targeted advertising (good for companies)
    Knowing this data allows for better prediction of behavior (this can also benefit government defense)
    Knowing this data, on a broader note, allows for better market predictions which may give both companies and governments a financial advantage

    While it is certainly an invasion of privacy (hence the now-common consensus that we live in an era of post-privacy), it is also very important that the US gathers this data. For one, it can be used for homeland and foreign security via behavioral prediction. Moreover, if the US does not have the data needed to continually automate in this new era, we will not have control over which things are to be automated (i.e., other countries will literally dictate to us which jobs will be done by whom). Finally, this data is essential to all countries for much the same reason: data has become an arms race of sorts, and the country that ‘wins’ this arms race will surely decide both the social and political norms of the near future.

    Few careers:
    Electrical/Computer Science Engineer
    Risk level: 28%
    Mechanical Engineer
    Risk level: 21%
    Note: in either case, I intend to work in some field of design for a large tech or defense company.
    Replaced careers:
    1950
    Switchboard Operator
    Elevator Attendant
    Linotype Operator
    1990
    Many assembly line workers have been put out of work
    Packagers
    i.e., at companies like Amazon, packages are being filled and sealed by automated robots rather than by people
    2020
    Many Photoshop artists were replaced by the new integrated AI update
    Taxi drivers have disappeared in many cities due to Uber
    They have also been displaced by driverless car programs in California and China

    The main problems I see would be both physical attacks (i.e., things like automated drones, which already exist) and online attacks (such as the recent attacks on hospital and oil pipeline systems by foreign nations/organizations). Moreover, in terms of cybersecurity, AI is far better at breaching security measures than humans are, so cybersecurity will increasingly become an issue.
    The key players in this race are undoubtedly the US and China; however, Japan, Taiwan, and Korea seem to have the potential and infrastructure to establish themselves as similar AI superpowers in the coming years.
    At the moment it seems that China and Russia will most likely ally in the race for AI, and the United States and the European Union will become similar allies. This once again appears to be capitalist/democratic countries vs. totalitarian states (much like the Cold War), which is why this new era has been referred to as the “New Cold War” by many sources.
    For the most part, I believe that these services will manifest themselves as an aid to our fundamental skills (such as writing basic code, aiding with outlines, etc.); however, they will by no means replace the creative aspect required for essays and similar tasks. Furthermore, code itself is also somewhat of an art form. That being said, ChatGPT may be able to write its own code, but new code and algorithms still need to be thought up by humans for the time being. For this reason, I would like to think that ChatGPT can be used to automate the redundant parts of code while humans, for now, remain in charge of the more complex and unique forms of code (AI works by examining past data, and to my understanding this means it cannot truly yet ‘think outside the box’).
    Once again, I don’t think any essential tangents of learning will be replaced by AI, as we will still need to learn how to do things for ourselves; however, once these things are learned, people will surely begin to rely on AI for rudimentary tasks. One thing that comes to mind is reading. At least for me, reading is something that takes an exceedingly long time, yet it is still an important skill to have for yourself. Therefore, using AI reading tools to speed up the process can help automate reading, but it hasn’t and won’t fully replace the need to learn to read.
    Just for the sake of clarity, and to answer the final part of the question in full, I would say that humans still need to learn almost all skills. That being said, once these skills are learned, there is no harm in using AI to automate the process of work (think about how in math we still learn addition yet we have a calculator that can do it for us).

    • Anand Jayashankar says:

      For Q1:
      These are pretty much what I said as well. I just realized, though, that the AI-and-oil parallel isn’t really fair, because the Saudis were simply lucky to have the most oil in the world; they didn’t “work” or do anything special for it, whereas the Chinese government has probably spent thousands of hours and employed millions of people to advance its AI.
      For Q2:
      Yep, this is exactly what I thought too. In order for the US to compete in the AI arms race, its citizens have to be willing to give up this data or else the Deep Learning algorithms won’t be able to compete with China’s.
      Hmmmm, I didn’t even consider this. I guess that’s true, though; if an authoritarian government (like China’s) is what it takes to get ahead in the AI arms race, I wonder if we’ll see any countries move to that form of government in the near future.
      For Q4:
      Wow, I completely forgot about cybersecurity in my response. This is a great point.
      For Q5:
      Great parallel. Personally, I don’t think it will end up like the Cold War; hopefully, the countries can reach a point of understanding as to how impactful AI will be in deciding the future of generations to come and agree upon some mutual “usage rate” or implementation of machine learning and AI. Maybe I’m too optimistic, though; it is politics, after all.
      I don’t know if that’s necessarily true, though. The AlphaGo algorithm in the documentary played a move that had never been recorded before.
      I love this comparison (In reference to having a calculator but still learning how to do all the operations the calculator can do by hand first)

  2. Anand Jayashankar says:

    When Kai-Fu Lee says, “Data is the new oil,” and “China is the Saudi Arabia of data,” what does he mean by those statements? Can you describe at least three ways that companies or governments with data monopolies could benefit from having those resources over other companies or governments?
    When Kai-Fu Lee says “Data is the new oil,” and “China is the Saudi Arabia of data,” he means that data is going to be the most valuable resource going forward, and the country that will have the most of this resource is going to be China. Companies and governments with data monopolies could benefit over other companies and governments by serving the most targeted and personalized ads, which should lead to more ad revenue; by running crime-prevention algorithms that could potentially flag “high-risk” people before they commit a crime; and by having more ability to control which political issues and news people are exposed to through more personalized “for you” pages in digital news.
    From your perspective as a user or non-user of social media or search indexing technology, do you think social media, search, or website (cookie) tracking of US citizens is important to the United States in the race to become a leader in artificial intelligence, or is that tracking/collection an invasion of privacy and individual rights in our country? What about in other countries?
    As a user of social media, I do think cookies are important in the race for the US to be a leader in artificial intelligence, but I also acknowledge that they are an invasion of privacy. In order for deep learning algorithms to function at their highest potential, the dataset the algorithm is trained on has to be as large as possible. As a result, the more tracking and “cookies” there are, the bigger the datasets will be, which should help these algorithms produce better predictive results. However, I don’t feel very safe knowing that Google or Instagram is keeping tabs on where I live, what time I use their services, what my favorite sports teams are, what my political opinions are, etc. Even under the assumption that this data won’t be “sold” to some random billionaire somewhere (as happened with Facebook), it hovers over the line of invasion of privacy. Knowing that this data can be and is sold to people and companies across the globe certainly makes this tracking an invasion of privacy, in my opinion.
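    To make the dataset-size point concrete, here is a minimal sketch (my own illustration, not from the documentary; it assumes scikit-learn is installed and uses its bundled digits dataset as a stand-in for user data): the same model, trained on progressively larger samples, generally predicts better on held-out data.

    # Compare test accuracy of one model trained on larger and larger slices
    # of the same training set (illustrative only).
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    for n in (50, 200, 800, len(X_train)):
        model = LogisticRegression(max_iter=2000)
        model.fit(X_train[:n], y_train[:n])        # train on the first n examples
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"trained on {n:4d} examples -> test accuracy {acc:.2f}")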
    Can you think of a few careers you might consider pursuing when you graduate from college or graduate school? Once you have a few ideas, visit https://willrobotstakemyjob.com/ and plug in those career ideas to see their overall risk of being replaced by automation. How vulnerable are those potential careers to automation? Can you think of 2-3 careers that existed in 1950 that do not exist today? How about 1990? How about 2020? List the jobs from those years/decades that have become redundant and been replaced by automation.
    Operations Research Analyst: 33% risk level
    Biostatistician: 27% risk level
    Business Intelligence Analyst: 69% risk level
    A couple of careers from the 1950s that don’t exist today are elevator operators and switchboard operators.
    A couple of careers from the 1990s that don’t exist anymore are video rental clerks (Blockbuster) and certain assembly line positions that are now automated.
    A couple of careers from 2020 that are dropping in numbers are cashiers and graphic designers.

    We live in a remarkable time in which nation-states are vying to become the leaders of AI technologies. What challenges do you foresee your generation having to face as nation-states look to become AI leaders and potentially use that technology against other nations? Who do you see as the key players in that race? In your opinion, how will that shape diplomatic and economic relationships between the United States, the European Union, the Russian Federation, and China?
    I foresee our generation having a lot of challenges when it comes to the U.S. becoming an AI leader. Due to the open discourse and freedom of speech in our country, and given that AI is an extremely controversial topic that people hold strong opinions about, there is going to be a lot of disagreement among citizens about where to draw the figurative line for AI. In China, by contrast, the government can basically do whatever it wants, and anybody who voices a contrary opinion will “disappear” for a while. I can foresee countries using their AI algorithms to find the “weaknesses” of other countries’ populations and then marketing/targeting in a way that exploits those weaknesses (maybe somewhat similar to what Britain did to China during the Opium Wars). I see the key players in the race being China, the US, and India. I don’t think the outlook will be as pessimistic as the documentary suggested, essentially a second cold war, but I do think the relations between these countries will be very competitive as they all try to win the AI arms race. In the end, I think that China will come out on top of the AI arms race, ahead of the US, the EU, Russia, and India.
    How will services like ChatGPT, QuillBot, etc. change the landscape of learning a fundamental skill like essay writing? What about writing code, or undercutting the need to learn computer science? What other essential tangents of learning could be replaced by AI? Of those tangents, is it a good thing that humans would or would not need to learn that skill? Why? Explain your answers.
    ChatGPT and similar services, in my opinion, will be able to replace purely “analytical” essays seamlessly. These are essays in which all the student is required to do is break down a novel, talk about a theme, incorporate quotes, etc. These services will essentially make the skill of writing these analytical essays obsolete. However, I don’t think they will be able to replace writing that is meant to elicit emotion from the reader (at least not yet). As we saw in AP Comp Sci last year, ChatGPT is able to write code effectively. I do think that at some point “basic” coding will be completely replaced by generative AI, so that skill may become obsolete. However, the skill of training deep learning algorithms will still be very important in the future (see the sketch below). Another “skill” that will eventually be replaced by AI is the skill of driving. In my opinion, this is a good thing. Humans aren’t perfect drivers, and even the very skilled ones are bound to get distracted, bored, or tired at some point, endangering themselves and others. This won’t happen with self-driving cars. Once all the cars on the road are self-driving, there won’t be any more motor vehicle accidents. However, as was discussed in the documentary, many jobs will be lost to this: Uber drivers, taxi drivers, truck drivers, etc. will all be out of business if all motor vehicles become self-driving. Another “skill” that may be replaced is data visualization. Similar to the previous example, it will be good in the sense that AI algorithms will most likely be able to create graphs and charts that communicate a message more quickly and efficiently than their human counterparts, but once again this will result in the loss of many jobs.
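    Since I claimed that training deep learning algorithms will remain an important skill, here is a minimal sketch of what that skill involves (my own illustration, assuming PyTorch is installed; the toy data and tiny network are made up): a human still chooses the architecture, the loss function, and the optimization loop, even if an AI assistant drafts the boilerplate.

    import torch
    import torch.nn as nn

    # Toy data: label is 1 when the two features sum to more than 1.
    torch.manual_seed(0)
    X = torch.rand(500, 2)
    y = (X.sum(dim=1) > 1.0).float().unsqueeze(1)

    # Architecture, loss, and optimizer are all human design choices.
    model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

    # Standard training loop: forward pass, loss, backward pass, update.
    for epoch in range(200):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()

    # Check how well the trained network fits the toy data.
    with torch.no_grad():
        preds = (torch.sigmoid(model(X)) > 0.5).float()
        print(f"training accuracy: {(preds == y).float().mean().item():.2f}")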

    • Gil Mebane says:

      1. This seems to be very similar to what I said. Nevertheless, I am curious about why exactly the government would attempt to personalize what political issues and news people are exposed to. Did you mean this to be strictly for companies, in the sense that left- and right-wing news sources would be more easily able to reach their audiences?

      2. We both pretty much said the same thing here once again (at least initially); however, I like that you expanded upon how deep learning algorithms best function. On another note, I might propose a question to you: would you be more open to the use of cookies and social media tracking if it were government-run? At least to me, it seems as though your main objection stems from the private (i.e., business) side of the industry and not from the governmental side (hence my prior question).

      3. To be perfectly honest, I had to Google all three of these jobs; however, I commend you for having already narrowed your job search so early in life (I literally had to list fields because I couldn’t think of a specific job).

      Perhaps the risk in this field (#3) comes from the fact that the primary goal is the analysis of data for the purpose of aiding business decisions.

      Nice example; I didn’t even think about how services like Netflix use AI to analyze your preferences vs. how Blockbuster used ‘human power’ to choose which movies to stock.

      4. Opium War = good connection

      Could you foresee this leading to other countries banding together in an effort to ‘defeat’ or catch up with China? On another note, I might have to disagree with you a bit about it not being essentially a second Cold War. What I mean is simply that it will become, and has already become, an arms race of sorts, regardless of whether it is as publicly broadcast as the Cold War was (think about how it is already illegal to invest in certain AI companies operated in China).

      5. Totally agree; analytics is essentially the backbone of AI, so it would make sense that these services would be good at those sorts of tasks.

      Good work; I sort of forgot to think about where these AI systems come from in my response (i.e., humans still need to create them).

      Dare I ask what the point of graphs is if AI is going to analyze the data anyway? In other words, would graphs themselves become obsolete?
