Human agency in the age of algorithms
‘A reductive, simplistic yet relatively neat mathematical model is offered as a streamlined solution for dealing with complex, messy human behaviour.’
Artificial intelligence (AI) and machine learning (ML) have become pervasive terms, found in ever-expanding contexts and applications. These include tasks such as simulating board games, generating texts and predicting human behaviour.
Machine learning algorithms are mathematical computations that carry out specific calculations when given well-defined problems with a set of loss functions (parameters for estimating downsides or costs). Building mathematical models of behaviour, action and preference using algorithms first requires framing these human phenomena as well-defined problems with a set of optimization functions (criteria for enabling selection of the best outcome from a range of alternatives). These computations take inputs, perform well-defined calculations, and return outputs. More importantly, however, we can only optimise for what we can codify and measure.
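The point about loss and optimization functions can be made concrete with a minimal sketch. A learning algorithm can only minimise a loss it has been given; everything outside that codified measure is invisible to it. All names here are illustrative, not drawn from any particular library.

```python
# A minimal sketch: 'learning' as minimising a codified loss function.
# The algorithm optimises exactly what the loss measures - nothing else.

def squared_error_loss(prediction: float, observed: float) -> float:
    """A typical loss function: the cost assigned to a wrong prediction."""
    return (prediction - observed) ** 2

def fit_constant_model(observations: list[float]) -> float:
    """'Learn' the single constant that minimises total squared error.

    The optimum is simply the mean, but we search a grid of candidates
    to make the optimisation step - select the best outcome from a
    range of alternatives - explicit.
    """
    candidates = [i / 100 for i in range(0, 1001)]  # guesses in [0, 10]
    return min(
        candidates,
        key=lambda c: sum(squared_error_loss(c, y) for y in observations),
    )

best = fit_constant_model([2.0, 4.0, 6.0])
print(best)  # 4.0, the mean of the observations
```

Whatever the observations leave out, the fitted model leaves out too: the computation takes inputs, performs a well-defined calculation, and returns an output, with no access to anything that was not codified and measured.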
In practice, at its core, ML is a process, however remote it may seem to be, of exhaustively mapping, automating, controlling, manipulating – and thereby directly or indirectly influencing – the physical, psychological and social world.
Most importantly, the attempt to map, formalise and automate (one aspect at a time, in a piecemeal manner) does not come from any desire to develop a meaningful understanding of ‘the human condition’. Often, the intent to control, manipulate, influence and predict future behaviours and actions is the major driving force. This line of thinking tends to see the human condition as a problem that needs to be solved. Sometimes, attempts to map, formalise and automate are intended to make the human redundant. The reductive, simplistic, yet relatively neat mathematical model is offered as a streamlined and cost-effective solution for dealing with messy complex human behaviour.
Computer vision, for example, utilises visual input such as image and other video data in order to measure, map, formalise and effectively structure and influence human cognition, behaviour and interactions, the physical environment, and eventually the social fabric. Every aspect of daily life is tracked, monitored, recorded, modelled and surveilled – from our geographic position to our facial expressions, from our body movement to our breathing, scratching, gait speed and sleep patterns – via the traces we leave, such as radio signals, location or video data.
Such mapping, formalising, monitoring and surveilling of human behaviour and the physical and social environment is done under the guise of convenience. ‘Frictionless’ transactions, ‘smart’ home assistants and fitness tracking systems, for example, are presented as tools that make daily life seamless. Such technologies, and the normalisation of these practices, at the same time normalise surveillance, along with the underlying, mistaken assumption that human affairs and the social world can be entirely mapped, controlled and predicted, and that the human condition is a problem that can and needs to be solved.
These attempts to measure, codify and control human behaviour and social systems are both futile and dangerous.
Human cognition
Treating human cognition in mechanistic terms, as static and self-contained, tends to mistakenly equate persons with brains. Focus on the brain as a seat of cognition has led to overemphasis on abstract symbol manipulation and on mental capabilities. Although the brain is important, humans are more than brains. Furthermore, the brain itself is a complex, dynamic and interactive organism that is not yet fully understood.
We know that humans are not brains that exist in a vat, nor do we exist in a social, political and historical vacuum. People are embodied beings that necessarily emerge in a web of relations. Human bodies are marked, according to cognitive scientists Ezequiel Di Paolo and colleagues, by ‘open-ended, innumerable relational possibilities, potentialities, and virtualities.’ As living bodies, which themselves change over time, we are compelled to eat, breathe, sometimes fall ill, fall in love and so on. We necessarily have points of view, moral values, commitments, and lived experiences. We are sense-making organisms that relate to the world and to others in ways that are significant to us. Excitement, pain, pleasure, joy, embarrassment and outrage are some of the feelings that we are compelled to feel by virtue of our relational existence.
Human beings, according to Di Paolo, are not something that can be finalised and defined once and for all, but are always under construction and marked by ambiguities, imperfections, vulnerabilities, contradictions, inconsistencies, frictions and tensions. Human ‘nature,’ if we can say anything about it at all, is the continual struggle for sense-making and resolving tensions. In short, due to these ambiguities, idiosyncrasies, peculiarities, inconsistencies, and dynamic interactions, human beings are indeterminable, unpredictable and noncompressible (meaning that they cannot be neatly captured in data or algorithms).
Furthermore, social norms and asymmetrical power structures permeate and shape our cognition and the world around us. This means that factors such as our class, gender, ethnicity, sexuality, (dis)ability, place of birth, the language we speak (including our accents), skin colour and other similar subtle factors either present opportunities or create obstacles in how a person’s capabilities are perceived.
The fact that humans and the social world at large are not something that can be neatly mapped, formalised, automated or predicted does not stop researchers, in big tech and startups alike, from putting forward tools and models that reductively or misleadingly claim to sort, classify and predict aspects of human behaviour, characteristics and actions.
Humans are not machines and machines are not humans
Simplistic views of human cognition are not new. Historically, the brain has been compared with some of the most advanced inventions or ideas of the time. At the height of the industrial revolution, the steam engine served as an apt metaphor for the brain. Go back a few thousand years and we find that the heart (not the brain) was seen as the central organ for cognition/thinking. Our paradigms and metaphors are reflections of the time and not naturally given facts. From the 1970s, powerful new metaphors have come to pervade: most notably, the metaphor of the brain as an information-processing machine. This is neither an argument against metaphors nor an attempt to deny that there can be parallels between brains and computers.
Metaphors are an important tool for understanding complex concepts. However, the problem arises when we forget metaphors are just that: metaphors. As the evolutionary geneticist Richard Lewontin contends, ‘We have become so used to the atomistic machine view of the world that originated with Descartes that we have forgotten that it is a metaphor. We no longer think, as Descartes did, that the world is like a clock. We think it is a clock.’
That is, we have come to think of ourselves in terms of machines and, conversely, to think of machines as humans. In order to do so we have reduced (and at times degraded) complex social and relational behaviour to its simplest form, while at the same time elevating machines to a status at or above the human, as impartial arbiters of human knowledge. As researchers Alexis Baria and Keith Cross (2021) put it, ‘the human mind is afforded less complexity than is owed, and the computer is afforded more wisdom than is due.’ When we see ourselves in machinic terms, human complexities, messiness and ambiguities are seen as an inconvenience that gets in the way of ‘progress’ rather than a reality, part of what it means to be human. Similarly, from the perspective of the autonomous vehicle developer, when human behaviour comes into conflict with what AI models predict, pedestrian behaviour, for example, is seen as an irrational and dangerous anomaly.
Current advanced technologies such as generative models that produce ‘human-like’ texts, images or voices are treated as authors, poets or artists. State-of-the-art models such as ChatGPT (and its variants), Stable Diffusion and MidJourney can produce impressive outputs with carefully curated prompts. ChatGPT, for example, can write rap lyrics, code and even essays when given unambiguously framed prompts. Similarly, Stable Diffusion and MidJourney can produce images in the style of a given artist. It is important to note that these models also produce outputs that are harmful (descriptions of how adding crushed porcelain to breast milk is beneficial), discriminatory (treating race as a determining factor for being a good scientist, in which the white race ends up superior), or plausible-sounding outputs that present non-existent ‘facts’ as true.
It is a mistake to treat these models as human, or human-like. Putting issues such as art forgery and plagiarism aside, what these models are doing is generating text and image outputs based on text and image data ‘seen’ previously. Human creativity, be it production of a piece of art or music, is a process that is far from input → output. Instead, it is characterised by struggles, negotiations and frictions. We do things with intent, compassion, worry, jealousy, care, wit, humour. And these are not phenomena that can be entirely measured, datafied, mapped or automated. There is no optimization function for love, compassion or suffering. In the words of computer scientist Joseph Weizenbaum, considered one of the fathers of modern artificial intelligence, ‘No other organism, and certainly no computer, can be made to confront genuine human problems in human terms.’
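The ‘generating outputs based on data seen previously’ claim can be illustrated with a toy model. The sketch below is a bigram ‘language model’ that can only recombine word sequences it has already encountered; real generative models are vastly larger and more sophisticated, but the basic input → output character, sampling continuations from previously seen data, is what this hypothetical example makes visible. All names are illustrative.

```python
# A toy illustration of generation as recombination of previously
# seen data: a bigram model that, for each word, samples one of the
# words that followed it in its training text. It cannot produce a
# word it has never seen.
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict:
    """Map each word to the list of words that followed it in training."""
    words = corpus.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers: dict, start: str, length: int, seed: int = 0) -> str:
    """Sample a word sequence by repeatedly choosing a seen continuation."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = followers.get(out[-1])
        if not options:  # nothing ever followed this word in training
            break
        out.append(rng.choice(options))
    return " ".join(out)

model = train_bigrams("the model maps inputs to outputs and the model repeats")
print(generate(model, "the", 5))
```

Every word the toy model emits was present in its training text, and every transition it makes was observed there; there is no intent, struggle or negotiation in the loop, only sampling from what was ‘seen’ before.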
Reflecting on the lyrics ChatGPT has produced ‘in the style of Nick Cave’, Nick Cave wrote in January 2023 that:
Writing a good song is not mimicry, or replication, or pastiche, it is the opposite. It is an act of self-murder that destroys all one has strived to produce in the past. It is those dangerous, heart-stopping departures that catapult the artist beyond the limits of what he or she recognises as their known self… it is the breathless confrontation with one’s vulnerability, one’s perilousness, one’s smallness, pitted against a sense of sudden shocking discovery… This is what we humble humans can offer, that AI can only mimic.(www.theredhandfiles.com/chat-gpt-what-do-you-think)
The human condition in the age of algorithms
Generative, classification and prediction models have become commonplace, deployed in public and private spaces such as airports, offices and homes in major cities across the globe, monitoring, tracking and predicting our purchases, our movements and our actions. Search engines selectively direct information, political advertising and news towards us (based on profiles they have assigned us, often without our knowledge); recommender systems nudge our next steps; and even seemingly simple and benign applications like spellcheck tend to determine what ‘correct’ use of language is. From political processes such as law enforcement, border control and aid allocation to very personal activity such as dating and the monitoring of our health, algorithmic systems mediate, influence and direct decision-making to varying extents, often with little or no mechanism for transparency or accountability.
As we have seen, human behaviour and social relations are non-determinable, in constant flux, and therefore non-predictable and not compressible into data or models. Yet, our every move, our behaviours and actions are continually, ubiquitously tracked, surveilled, datafied, modelled and nudged in certain directions and away from others. Rather than effectively capture non-determinable, non-predictable and complex human behaviour, what these models end up doing is modelling and amplifying societal norms, conventions and stereotypes – automating the status quo. As a result, models that often fail to work properly, or models that do not work for all, or worse, models that automate the status quo and are by definition discriminatory, continue to be built and deployed. Directly or indirectly, these models serve as constraints, tools that limit options, opportunities, choices, agency and freedom.
Obvious consequences of this include a gradual invasion of privacy, the normalisation of surveillance, simplification of human complexity, reduction of agency, and the perpetuation of negative stereotyping and other forms of injustice. In most cases, the fact that these technologies influence critical decisions and constrain opportunities remains hidden. Most surveillance technology producers operate in the dark and frequently go out of their way to hide their existence. One knock-on effect of this is that as we become aware that we are being watched, monitored and predicted, we alter our behaviour accordingly, or ‘game the models’. Automation of content moderation on social media platforms, for example, has forced minoritized communities who are disproportionately censored to alter their language in order to discuss topics without their content getting flagged or removed. This alteration has come to be known as ‘algospeak’. With the awareness that most recruiters and employers use automated filtering algorithms to screen CVs, there is now a flourishing market providing advice on how to write CVs that can pass algorithmic screening.
For the most part, market monopoly, wealth accumulation and power to dominate are the central drives behind mass data collection and model building for big tech corporations, powerful institutions and start-ups alike. Questions of social impact, negative downstream effects, and accountability are obfuscated or diverted as irrelevant by those in the position to build and deploy these scientifically and ethically questionable models. This situation marks the rise of unrivalled power and influence concentrated in few, wealthy hands that have neither the experience nor the expertise to understand the issues, nor the interest to alleviate them. Consequently, models that sort, classify, influence and predict are free to nudge human behaviour and social systems towards values that align with these central drives – wealth and power. But not, for example, justice.
As a result, democratic processes have been wrecked; people have been wrongfully arrested; disenfranchised data workers have been exploited; people have been denied medical care, loans, jobs, mortgages and welfare benefits; underprivileged students’ exam results have been downgraded; artists have been plagiarised; communal violence has been promoted (with dire results, from death to political instability); the list goes on.
Designers, producers and vendors of algorithmic models go out of their way to evade responsibility and accountability. Without proper accountability mechanisms, for example, to screen and assess deployed models and ensure ethical consideration in the design of those that are being built, and without the regulatory frameworks that protect individuals and communities that are most impacted by algorithmic systems, we continue to replicate these injustices at a mass scale.
References
Baria, A. T., & Cross, K. (2021). The brain is a computer is a brain: Neuroscience’s internal debate and the social significance of the Computational Metaphor. arXiv preprint arXiv:2107.14042.
Di Paolo, E. A., Cuffari, E. C., & De Jaegher, H. (2018). Linguistic bodies: The continuity between life and language. Cambridge, MA: MIT Press.
Lewontin, R. (1996). Biology as ideology: The doctrine of DNA. Toronto: House of Anansi.
Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. WH Freeman.
Abeba Birhane
Abeba Birhane is a cognitive scientist researching human behaviour, social systems, and responsible and ethical Artificial Intelligence (AI). Her interdisciplinary research sits at the intersection of embodied cognitive science, machine learning, complexity science and decoloniality theories. Her work includes audits of computational models and large-scale datasets. Birhane is a Senior Fellow in Trustworthy AI at the Mozilla Foundation and an Adjunct Assistant Professor at the School of Computer Science and Statistics at Trinity College Dublin, Ireland.
© Abeba Birhane