This explainer was written in response to colleagues’ requests to know more about temporal bias in AI ethics. It begins with a refresher on cognitive biases, then dives into: how humans understand time, time preferences, present-day preference, confidence changes, planning fallacies, and hindsight bias.
Bias is a really big topic, but I’ll try to succinctly define a subsection of it—implicit cognitive bias—in a way that is particularly useful for AI ethics. Humans have cognitive biases, which means every one of us, to varying degrees, holds beliefs and impressions that are not backed up by fleshed-out reasoning or evidence, or that we never bothered questioning in the first place.¹ These beliefs and impressions are the result of past experiences, which are often shaped by the culture we’re steeped in from birth, or by our upbringing, for example.
Biases can also be implicit, in that they are present in our brain but arise independently of any conscious cognition. Temporal bias can manifest in just this way: we prefer engaging in a fun activity now instead of a week, or even an hour, later, for no explicit reason.
(There are also more explicit biases, where an individual has a conscious opinion or belief about something, someone, a group of people, etc. For example, I am explicitly biased towards chocolate ice cream, and so I choose it over other ice cream flavours, because it is my favourite. This is not the kind of bias I will be addressing here.)
If something is “temporal,” then it is related to time. Time is one of those subjects that, on the surface, is extremely simple, but can get really complex, really fast if you start questioning even the most basic concepts. I say time is, on the one hand, simple because we teach children in elementary school how to tell time on a clock and why telling time is so important. On the other hand, look at Being and Time by Martin Heidegger, and suddenly the question of time takes an unexpected turn. Time is no longer universal and linear, but rather specific to and shaped by events and beings.
Temporal Bias #1: How We Understand Time
So why these remarks on the concept of time? Why does it matter, for temporal bias, that time can become a complicated subject? It matters because these other ways of understanding time show bias on our part—we are biased in that what most of us understand as “time unqualified” is in fact only one way of making sense of time. The Gregorian calendar we are now accustomed to is not the first nor the only calendar that ever existed—before it came the Julian calendar (named for Julius Caesar), which divided the year differently enough that its dates slowly drift apart from the Gregorian ones over the centuries. When the French Revolution took place, the revolutionaries created their own calendar, named their own months (with unbelievably poetic names), and changed the dates. The Chinese New Year is not celebrated on January 1st because it is based on another calendar, the traditional Chinese calendar.
For example, a number of Indigenous peoples in Australia and North America (as well as some non-Indigenous peoples) have a circular understanding of time. This can succinctly be described as follows:
Aboriginal people do not perceive time as an exclusively ‘linear’ category (i.e. past—present—future) and often place events in a ‘circular’ pattern of time according to which an individual is in the centre of ‘time-circles’ and events are placed in time according to their relative importance for the individual and his or her respective community (i.e. the more important events are perceived as being ‘closer in time’).²
It’s important to keep in mind that this is a very brief summary that generalizes the beliefs of many individuals and groups, and so it is only partially accurate.³ However, I believe this is extremely helpful in showcasing how biased we are when it comes to our understanding of time.
It’s also crucial to note that when I say “biased,” it is not necessarily a synonym for “incorrect”—meaning “false” or “the wrong answer.” To be (implicitly) biased implies that we are going about our lives operating on unexamined and unquestioned beliefs, impressions, and reactions. This all comes prior to whether your initial position is correct or not. It’s not particularly correct or incorrect to understand time as linear, and the same is true of understanding time as cyclical. Regardless, it’s biased to think that understanding time as linear is the only way, or the only correct way, to do so. That belief is incorrect. Biases often lead us to hold incorrect beliefs and assumptions, which can fuel harmful behaviour, which can then harden into harmful systems within our societies.
The perception or belief that linear time is the only way, or the only accurate way, to understand time is, to me, a form of temporal bias. However, it’s not what most people understand by the term “temporal bias.” Going through the topics of bias and temporality in the way I just did, although unorthodox, allowed me to do two things that I believe are crucial: 1) give (hopefully) clear partial definitions of bias and temporality, and 2) introduce another way of understanding how we might be “temporally biased,” which hopefully helps you fight your own biases a little better. I won’t specifically relate this form of temporal bias to AI ethics, as I believe it’s much more interesting and impactful for readers to ponder how a shift in how they understand time affects their way of being and their relationship to the world.
So, what do people usually mean when they talk about temporal bias? Good question. Once again, the information I offer here is—you’ve guessed it—biased. It’s a result of the resources I encountered, which are at least a partial result of whom I spend my time with, my prior beliefs and values, and more.
Temporal Bias #2: Time Preference
My impression of what most people refer to when they talk about temporal bias is time preference, where rewards (like money, or food) are seen as more valuable when acquired now rather than later.⁴ This is just one way someone can be temporally biased, but it is still relevant to AI ethics in many ways.
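To make time preference concrete, the literature cited above models it as delay discounting: a reward’s subjective value shrinks the further away in time it is. Below is a minimal Python sketch of two standard models from that literature; the function names and parameter values are my own illustrative choices, not taken from any particular library or paper.

```python
# Delay discounting: how much is a reward worth if received `delay` periods from now?
# Parameters (rate, k) are illustrative, not empirical estimates.

def exponential_discount(value, delay, rate=0.1):
    """Exponential discounting: value / (1 + rate) ** delay.
    A constant per-period discount rate, as in standard economic models."""
    return value / (1 + rate) ** delay

def hyperbolic_discount(value, delay, k=0.1):
    """Hyperbolic discounting: value / (1 + k * delay).
    Drops steeply for short delays, then flattens out — often a better
    fit for observed human behaviour than the exponential model."""
    return value / (1 + k * delay)

# Compare how $100 loses subjective value at increasing delays.
for delay in [0, 1, 10, 50]:
    print(delay,
          round(exponential_discount(100, delay), 2),
          round(hyperbolic_discount(100, delay), 2))
```

The steep initial drop of the hyperbolic curve is one way to formalize the pull of “now”: the difference between today and next week feels much larger than the difference between week 51 and week 52.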
For example, we (individuals working on AI ethics) may be tempted to work on issues that we see as solvable in the shortest amount of time. If a dataset is biased because 85% of the photographs it contains are of White men, then the quickest fix may be to inject images of individuals who are non-White, and of individuals who are of other genders. If one can make such a dataset “representative” (I’m really oversimplifying here), then great, problem solved, right? But beyond the fact that a perfectly representative, completely “unbiased” dataset doesn’t seem to exist, our solution leaves a much larger issue practically untouched: the fact that we started with a dataset that was mostly made up of pictures of White men. In other words, we haven’t addressed the much larger, more difficult, and much more long-term issues of racism and sexism. Why? Because the reward of solving a specific problem was bound to be much closer in time if the issue we chose to work on was a single dataset rather than dismantling systemic racism, sexism, and white supremacy.⁵
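The “audit the dataset” step of the quick fix above can be sketched in a few lines. Everything here is hypothetical: the field names and the tiny sample are invented purely for illustration, and real demographic auditing raises its own hard questions about labelling people in the first place.

```python
from collections import Counter

# A toy stand-in for a photo dataset's metadata (hypothetical labels).
photos = [
    {"race": "white", "gender": "man"},
    {"race": "white", "gender": "man"},
    {"race": "white", "gender": "woman"},
    {"race": "black", "gender": "woman"},
]

def group_shares(records, key):
    """Return each group's share of the dataset for a given attribute."""
    counts = Counter(record[key] for record in records)
    total = len(records)
    return {group: count / total for group, count in counts.items()}

print(group_shares(photos, "race"))    # e.g. {'white': 0.75, 'black': 0.25}
print(group_shares(photos, "gender"))  # e.g. {'man': 0.5, 'woman': 0.5}
```

Computing these proportions is the easy, quickly rewarded part; deciding what a fair composition even means, and addressing why the imbalance existed, is the long-term work the paragraph above describes.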
Now, this is only one example of how time preference (one of the forms temporal bias can take, in my view) can affect our work in AI ethics. But you can imagine any “reward” in the field, any problem we find a solution to, and think about how you might prefer to focus on that problem simply because the reward of solving the problem will be closer in time—not because solving the issue will bring more good into the world.
Temporal Bias #3: You Prefer Present-day Humans
Most of the information I’ve personally encountered related to temporal bias has been through the Effective Altruism (EA) movement. Now, let me just pause right here and acknowledge that there are some pretty strong opinions about EA as well as within EA itself. Many seem to either buy into EA 100%, or reject its approach vehemently. This appears to me as particularly true when people encounter EA through one of its trademark positions on AI: that artificial general intelligence (AGI) is a serious threat that we must work on now if we are to avoid possible disasters like human extinction.⁶ With regard to this, I’ve seen very few people take the middle ground, or not have strong feelings/opinions about it. Personally, I’m very skeptical of AGI being a risk, especially one we should be dedicating serious resources and time to right now. I’ve done some reading about the issue, and discussed this with a few people, but I am in no position to tell others that they should or should not believe those who consider AGI to be an imminent threat. In other words, I don’t have any serious arguments to move my skepticism to a fully fledged position I stand by. Hence if you’re curious, I invite you to, like me, look into the debate for yourself and see what you find (more information and links on this later).
Anyway (forgive the long-winded tangent), all this is to say that I don’t endorse every single aspect of EA, but I do think the movement has brought to light some fascinating and impactful information and ways of doing things. For instance, I’ve greatly benefited from the idea that we’re biased towards acting to influence the lives of people in the present, but there may be an opportunity to do just as much (or even more) good by trying to improve the lives of those who will live in the future.
For instance, if I value improving the lives of others, and I believe that all human beings have the same value and thus all equally deserve to have their lives improved, then why would I choose to focus on helping those who are currently alive over those who may be born in 200 years?⁷ And if it is the case that, through my actions today, I can do more good for people who will be born in 200 years, then it seems like 1) I’m incorrect in my initial belief that I should focus on work that will help people living on Earth with me today, 2) it may be morally wrong for me to continue doing that, as I may be knowingly inhibiting more good from being done, and 3) I might have kept trying to help people who are currently alive (or not even bothered wondering if these are the people I should be focusing on) because I have a temporal bias. That is, without seeking (or despite being presented with) evidence and reasoning that tells me it’s better (or even just equally good) to focus on doing good for people who will be born 200 years from now, I continue to work on making the lives of people in the present better.
To me, this seems pretty significant to AI ethics. For one, if you are convinced you can do more or equal good by focusing on AI risks that humans may face in 200 years, then you have good reasons to be working on these issues, and not on other, more short-term issues. In this scenario, what you choose to work on to help future humans also greatly depends on what you think the future will look like in 200 years. Will we not only have autonomous cars, but will they fly? Will we still have police, and will they be using facial recognition technology? Will we be faced more directly with the threat of AGI?
This is where my tangent on the EA movement becomes important. Some individuals have come to the conclusion that they can do more good (or at least prevent more harm from happening) by working on ensuring that AGI is aligned with human goals and/or values (that is, AGI doesn’t enslave us, cause our extinction, etc.), even if these risks are not ones we face here and now. While this position is still somewhat rare in the broader AI ethics movement⁸, a number of people who would be characterized as very smart by our society’s metrics are working on this issue and have been writing and speaking about its importance.⁹ Once again, I leave it up to the reader to decide for themselves if the arguments are convincing.¹⁰
Regardless, you may be temporally biased if you never examine the possibility that working on issues that will primarily benefit future generations could do more good, or if, after exploring the subject and encountering convincing arguments and evidence in favour of work that will benefit future humans, you disregard them and continue to work on issues that mostly benefit present-day humans.
Temporal Bias #4: Confidence Changes¹¹
Confidence changes highlight a bias towards being more confident about events in the far future than those closer in time.¹² This temporal bias is quite broad, and thus applies to many scenarios inside and outside AI ethics. For example, as a team of researchers discusses a future AI model they would eventually like to build, the success of the project seems almost guaranteed. Not only will the model have high accuracy, it will also be robust enough for real-world use, and it won’t be biased against any demographic or identifiable group. In essence, everything will work out fine. But then, as the team gets ready to start the research process, potential problems suddenly seem much more likely, requiring a significant shift in the researchers’ approach and timeline for the project. This may lead to a subpar AI model if the overconfident researchers did not plan enough time to address the challenges that arise, some of which are bound to lead to ethical issues if not addressed properly.
Temporal Bias #5: Planning Fallacy
As humans, we have a tendency to significantly underestimate the amount of time a given activity will take: this is the planning fallacy.¹³ The consequences of the shift in approach and timeline I mention in the section above regarding confidence changes may very well be made worse by the fact that even if everything were to go according to plan and no problems arose, the plan was already unrealistically optimistic given the amount of work to be done. As a result, the team of researchers may feel compelled to attempt quick fixes, or to skip testing the model as extensively as they should for unfair biases or marked inaccuracies on inputs coming from an identifiable group.
Temporal Bias #6: Hindsight Bias
Lastly, there is the hindsight bias: once an event’s results are known, we mistakenly believe the result was obvious and that we knew what it was going to be all along.¹⁴ This bias can come into play with ethics and AI in the following way. Looking at the field of AI now, with hindsight bias it may seem almost inevitable that AI systems were going to become so advanced and prominent in our societies—AI is simply another step in the right direction. This narrative surrounding AI (its inevitability, its status as the frontier of progress for humankind, its promise of enhancing human abilities and societies) makes us forget the bumpy history of AI and stops us from questioning whether AI is, or will always be, a positive for humanity. It can push us not only to overlook the harms and disadvantages brought upon us by AI, but to continue developing it even if it brings more harm than good to most people.
Hopefully, this was an interesting introduction to temporal bias, and to bias more generally. Just remember that even if you do the work, find the evidence, and think through your biases … other biases are definitely lurking.
¹ Greenwald, A. G., & Krieger, L. H. (2006). Implicit bias: Scientific foundations. California law review, 94(4), 945-967.
² Janca, A., & Bullen, C. (2003). The Aboriginal concept of time and its mental health implications. Australasian Psychiatry, 11(1_suppl), S40-S44.
³ If, like me, you’ve grown up in an almost exclusively Western culture where time is limited to minutes, days, and years, it’s tempting (consciously or not) to read about Indigenous knowledge and ways of knowing (about time and about other topics) and treat them as a mere interesting alternative to the “real” or “correct” way of understanding time or other topics. I invite you to actively challenge that reflex or assumption. This familiarity with Western ideas of time (and consequent unfamiliarity with typically Indigenous or non-Western ideas of time) is only a testament to the strength of my, and perhaps your, biases.
⁴ Doyle, J. R. (2013). Survey of time preference, delay discounting models. Judgment and Decision Making, 8(2), 116–135.
⁵ This is one reason one might say that identifying biased datasets as the only or even the main issue we should be working on is misguided.
⁶ Not everyone in EA believes this, but many of the people who believe this are also EAs or are affiliated with the movement.
⁷ There are many considerations that may come into play here, and convince you that attempting to ameliorate the lives of present humans is still the better option. These include discounting, uncertainty about what good really is, uncertainty about being able to do good in the future, the chances you believe humanity has of still being around in x amount of years, and more.
⁸ I should say that individuals working on AGI risks don’t usually characterize themselves as being part of the AI ethics movement, as AI ethics is typically focused on present or very near future risks.
⁹ To be fair, there are also many conventionally smart people who are not swayed by the arguments surrounding AGI.
¹¹ The next three sections are inspired by the temporal biases presented in Sanna, L. J., & Schwarz, N. (2004). Integrating Temporal Biases: The Interplay of Focal Thoughts and Accessibility Experiences. Psychological Science, 15(7), 474–481. https://doi.org/10.1111/j.0956-7976.2004.00704.x Thank you to Mo Akif for bringing this paper to my attention.
¹² As defined in Sanna & Schwarz, 2004.
¹³ As defined in Sanna & Schwarz, 2004.
¹⁴ As defined in Sanna & Schwarz, 2004.