All posts by Kristin Fontichiaro

About Kristin Fontichiaro

Kristin Fontichiaro is the principal investigator of this IMLS-funded project and a clinical associate professor at the University of Michigan School of Information.

FRED’s economic resources praised in today’s New York Times!

Economics reporter Neil Irwin wrote about his technology-related workflow in today’s New York Times. In “Why ‘Fred’ Is the Best Friend of Economics Writers,” he says:

Economics is a topic full of data. What tools do you use to parse that data? And what sites or apps do you use to keep on top of the latest economic trends?

Every economics writer’s best friend is named Fred. It stands for Federal Reserve Economic Data, and it’s maintained by the Fed bank in St. Louis. It allows you to use a single interface to pull, at last count, 509,000 different data series from 87 different sources of economic and financial data.

A big part of the advantage is simply that once you’re familiar with the interface, which is intuitive, you don’t have to relearn the data retrieval tool for each statistical agency every time. So, for example, I write about the European economy only now and again, so I have to relearn how to use the Eurostat database every time if the data isn’t in Fred. That’s not for the faint of heart.

I generally use Microsoft Excel for data analysis, which is powerful enough to do most of the stuff I know how to do on my own. That’s to say, if a project requires a bigger data set or more complex statistical techniques than Excel can handle, I probably will need help from a colleague with more advanced programming skills anyway.

Or for a quick calculation of, say, percentage change I use a Texas Instruments scientific calculator I keep on my desk…
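For readers who want to poke at FRED themselves, the same data behind that single interface can also be pulled programmatically. Below is a minimal sketch, assuming the third-party pandas_datareader package is installed; the series ID UNRATE (the U.S. civilian unemployment rate) is just one illustrative choice among the hundreds of thousands of series Irwin mentions, and the final step is the same kind of quick percentage-change calculation he describes doing on a desk calculator.

    # Minimal sketch: pull one FRED series and compute percentage change.
    # Assumes the pandas_datareader package is installed; UNRATE is an
    # illustrative series ID (U.S. civilian unemployment rate).
    from datetime import datetime

    import pandas_datareader.data as web

    unemployment = web.DataReader("UNRATE", "fred",
                                  start=datetime(2010, 1, 1),
                                  end=datetime(2018, 6, 1))

    # Month-over-month percentage change of the series.
    pct_change = unemployment["UNRATE"].pct_change() * 100

    print(pct_change.tail())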

Congratulations, FRED team!

We were thrilled to have Katrina Stierholz from FRED and Charissa Jefferson of Cal State Northridge introduce FRED at our recent 4T Data Literacy conference. Check out the archived session here.

Aeon: AI, virtual assistants, and emotional intelligence

We’ve talked a lot about digital assistants in Data Literacy in the Real World: Conversations and Case Studies and in our 2017 and 2018 conferences. Devices like Amazon’s Alexa are cool and accessibly priced, but there’s so much more to unpack behind those high-tech cylinders.

Do you want to bare it all to a digital assistant? What happens when we outsource our emotional soothing to a machine? How are today’s devices being coded to reflect the culture of their users? These are some of the questions referenced in “The Quantified Heart,” an essay on Aeon by Polina Aronson and Judith Duportail. Here are a few excerpts to whet your appetite for the entire essay:

[A]n increasing number of people are directing such affective statements, good and bad, to their digital helpmeets. According to Amazon, half of the conversations with the company’s smart-home device Alexa are of non-utilitarian nature – groans about life, jokes, existential questions. ‘People talk to Siri about all kinds of things, including when they’re having a stressful day or have something serious on their mind,’ an Apple job ad declared in late 2017, when the company was recruiting an engineer to help make its virtual assistant more emotionally attuned. ‘They turn to Siri in emergencies or when they want guidance on living a healthier life.’

Some people might be more comfortable disclosing their innermost feelings to an AI. A study conducted by the Institute for Creative Technologies in Los Angeles in 2014 suggests that people display their sadness more intensely, and are less scared about self-disclosure, when they believe they’re interacting with a virtual person, instead of a real one. As when we write a diary, screens can serve as a kind of shield from outside judgment.

Soon enough, we might not even need to confide our secrets to our phones. Several universities and companies are exploring how mental illness and mood swings could be diagnosed just by analysing the tone or speed of your voice … By 2022, it’s possible that ‘your personal device will know more about your emotional state than your own family,’ said Annette Zimmermann, research vice-president at the consulting company Gartner, in a company blog post …

[N]either Siri or Alexa, nor Google Assistant or Russian Alisa, are detached higher minds, untainted by human pettiness. Instead, they’re somewhat grotesque but still recognisable embodiments of certain emotional regimes – rules that regulate the ways in which we conceive of and express our feelings.

These norms of emotional self-governance vary from one society to the next … Google Assistant, developed in Mountain View, California looks like nothing so much as a patchouli-smelling, flip-flop-wearing, talking-circle groupie. It’s a product of what the sociologist Eva Illouz calls emotional capitalism – a regime that considers feelings to be rationally manageable and subdued to the logic of marketed self-interest. Relationships are things into which we must ‘invest’; partnerships involve a ‘trade-off’ of emotional ‘needs’; and the primacy of individual happiness, a kind of affective profit, is key. Sure, Google Assistant will give you a hug, but only because its creators believe that hugging is a productive way to eliminate the ‘negativity’ preventing you from being the best version of yourself …

By contrast, Alisa [a Russian-language assistant] is a dispenser of hard truths and tough love; she encapsulates the Russian ideal: a woman who is capable of halting a galloping horse and entering a burning hut (to cite the 19th-century poet Nikolai Nekrasov). Alisa is a product of emotional socialism, a regime that, according to the sociologist Julia Lerner, accepts suffering as unavoidable, and thus better taken with a clenched jaw rather than with a soft embrace. Anchored in the 19th-century Russian literary tradition, emotional socialism doesn’t rate individual happiness terribly highly, but prizes one’s ability to live with atrocity.

Alisa’s developers understood the need to make her character fit for purpose, culturally speaking. ‘Alisa couldn’t be too sweet, too nice,’ Ilya Subbotin, the Alisa product manager at Yandex, told us. ‘We live in a country where people tick differently than in the West. They will rather appreciate a bit of irony, a bit of dark humour, nothing offensive of course, but also not too sweet’ …

Every answer from a conversational agent is a sign that algorithms are becoming a tool of soft power, a method for inculcating particular cultural values. Gadgets and algorithms give a robotic materiality to what the ancient Greeks called doxa: ‘the common opinion, commonsense repeated over and over, a Medusa that petrifies anyone who watches it,’ as the cultural theorist Roland Barthes defined the term in 1975. Unless users attend to the politics of AI, the emotional regimes that shape our lives risk ossifying into unquestioned doxa …

So what could go wrong? Despite their upsides, emotional-management devices exacerbate emotional capitalism …These apps promote the ideal of the ‘managed heart’, to use an expression from the American sociologist Arlie Russell Hochschild …

Instead of questioning the system of values that sets the bar so high, individuals become increasingly responsible for their own inability to feel better. Just as Amazon’s new virtual stylist, the ‘Echo Look’, rates the outfit you’re wearing, technology has become both the problem and the solution. It acts as both carrot and stick, creating enough self-doubt and stress to make you dislike yourself, while offering you the option of buying your way out of unpleasantness . . .

[I]t’s worth reflecting on what could happen once we offload these skills on to our gadgets.

For discussion: Free facial recognition technology available to schools … do you want it?

Wired has a fascinating article out about a newly released, free facial recognition tool that, coupled with existing video monitoring, its maker claims will help keep schools safer. From the article by Issie Lapowsky:

“RealNetworks has developed a facial recognition tool that it hopes will help schools more accurately monitor who gets past their front doors. Today, the company launched a website where school administrators can download the tool, called SAFR, for free and integrate it with their own camera systems …

[F]acial recognition technology often misidentifies black people and women at higher rates than white men.

“The use of facial recognition in schools creates an unprecedented level of surveillance and scrutiny,” says John Cusick, a fellow at the Legal Defense Fund. “It can exacerbate racial disparities in terms of how schools are enforcing disciplinary codes and monitoring their students.”

Glaser … is all too aware of the risks of facial recognition technology being used improperly. That’s one reason, in fact, why he decided to release SAFR to schools first …

“I personally agree you can overdo school surveillance. But I also agree that, in a country where there have been so many tragic incidents in schools, technology that makes it easier to keep schools safer is fundamentally a good thing” …

Glaser approached the administrators at his children’s school . . . which had just installed a gate and camera system, and asked if they might try using SAFR to monitor parents, teachers, and other visitors who come into the school. The school would ask adults, not kids, to register their faces with the SAFR system. After they registered, they’d be able to enter the school by smiling at a camera at the front gate. (Smiling tells the software that it’s looking at a live person and not, for instance, a photograph). If the system recognizes the person, the gates automatically unlock. If not, they can enter the old-fashioned way by ringing the receptionist.

According to head of school Paula Smith, the feedback from parents was positive, though only about half of them opted in to register their faces with the system … [The school] decided deliberately not to allow [its] students, who are all younger than 11, to participate, for instance …

For now, RealNetworks doesn’t require schools to adhere to any specific terms about how they use the technology. The brief approval process requires only that they prove to RealNetworks that they are, in fact, a school. After that, the schools can implement the software on their own. There are no guidelines about how long the facial data gets stored, how it’s used, or whether people need to opt in to be tracked …

There are also questions about the accuracy of facial recognition technology, writ large. SAFR boasts a 99.8 percent overall accuracy rating, based on a test, created by the University of Massachusetts, that vets facial recognition systems. But Glaser says the company hasn’t tested whether the tool is as good at recognizing black and brown faces as it is at recognizing white ones. RealNetworks deliberately opted not to have the software proactively predict ethnicity, the way it predicts age and gender, for fear of it being used for racial profiling. Still, testing the tool’s accuracy among different demographics is key. Research has shown that many top facial recognition tools are particularly bad at recognizing black women. Glaser notes, however, that the algorithm was trained using photos from countries around the world and that the team has yet to detect any such “glitches.” Still, the fact that SAFR is hitting the market with so many questions still to be ironed out is one reason why experts say the government needs to step in to regulate the use cases and efficacy of these tools.

“This technology needs to be studied, and any regulation that’s being considered needs to factor in people who have been directly impacted: students and parents,” Cusick says …

The question is whether it will do any good. This sort of technology, Levinson-Waldman points out, wouldn’t have stopped the many school shootings that have … been perpetrated by students who had every right to be inside the classrooms they shot up …

Glaser, for one, welcomes federal oversight of this space. He says it’s precisely because of his views on privacy that he wants to be part of what is bound to be a long conversation about the ethical deployment of facial recognition … “If the only people who are providing facial recognition are people who don’t give a shit about privacy, that’s bad.”
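Before turning to the discussion questions, it may help to make the entry workflow Lapowsky describes concrete. The sketch below is purely hypothetical: it is not RealNetworks’ code, and the function and parameter names are invented for illustration. It simply encodes the decision logic from the excerpt, where a recognized, smiling visitor (the smile serving as the liveness check) gets the gate unlocked, and everyone else rings the receptionist.

    # Hypothetical sketch of the gate logic described in the excerpt.
    # This is not RealNetworks/SAFR code; names are invented for illustration.

    def decide_entry(face_is_registered: bool, is_smiling: bool) -> str:
        """Return the action the gate would take for one visitor."""
        if face_is_registered and is_smiling:
            # Recognized face plus a smile (the liveness check): unlock.
            return "unlock gate"
        # Anyone unrecognized -- or who chose not to register -- enters
        # the old-fashioned way.
        return "ring receptionist"

    # A registered parent who smiles at the camera:
    print(decide_entry(face_is_registered=True, is_smiling=True))
    # A visitor who never opted in:
    print(decide_entry(face_is_registered=False, is_smiling=True))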

Some questions to think about or discuss with colleagues or students:

  1. What problems could this tool solve in your school?
  2. Is this a tool that solves the security and safety needs you see in your school?
  3. Does the company owner’s good intent serve as adequate reassurance that privacy will be protected?
  4. This project is being piloted in Wyoming, a state that was 90.7% white during the last Census, and University Child Development School in Seattle, whose home page is dominated by images of Caucasian children. Are these pilot sites representative of schools across America? Why or why not?
  5. Does the fact that the tool opts not to identify people by race diminish the likelihood that it will miscategorize people?
  6. Where will the data be kept, and what is the plan for data management?
  7. What is captured as visitors who do not smile or who are not in the system pass by the camera?
  8. Smiling has different meanings in different cultures. Does the requirement to smile (to show the face moving and distinguish the streaming image from a photograph) pose any anticipated benefits or challenges in your school?
  9. What benefits do you see in a school adopting this tool?
  10. What unintended consequences might arise from this tool?
  11. Is this a tool you would recommend to your school or district? Now? In a few years after further testing?
  12. What is one piece of advice you would give this company?

Read the full article here.

***UPDATED 7/25/2018: So … how do you feel about AI and photos of your kid’s summer camp?