Perhaps you've watched the movie DON'T LOOK UP. The main takeaway for me was that some things are so obvious that we tend to ignore them, suspend our belief in favour of Science, or take a wait-and-see approach. The overall sentiment in debates about AI in education is that it is here to stay and "You better get on board or risk being left behind". While this sentiment might be true, it does not encourage critical AI literacy; it encourages fear.
I've been listening to a few seminars and webinars over the past few weeks, and everyone appears to be singing from the same songbook. Most sing AI's praises, while a small minority speak of its potential downsides and the effect it will most likely have on academic jobs should we begin to use AI as it is envisaged to be used. We have all heard about the disruption it will cause in the job market, yet we in academia speak as if we do not form part of that workforce.

One of the challenges is a lack of specificity when we speak about AI in academia, which lumps all AI into a single category in people's minds. This is not helpful for critical debate for or against its use. Alongside this comes the commonly stated phrase that "AI is here to stay and so we better embrace it." In South Africa we face the huge issue of the digital divide. It is not unique to us, but its impact is more dire in developing countries. Such statements create an AI divide on top of the digital divide: between AI users and those who don't use it, or at least between those who use AI knowingly and those who use it unknowingly.
AI ethics policies attempt to put external guardrails in place to curtail the bad use of AI, but they do little to address bad actors in AI, which requires internal guardrails. AI ethics shows us where the lines are; morality and values are what keep us from crossing them. Much of the discussion around AI in academia is limited to concerns about maintaining academic integrity, as if the role of higher education were not also to be concerned with societal well-being and the ethical and moral conduct of its citizens, especially in the South African context.
When computers were introduced, the mantra was to not let technology drive the learning. It appears that with AI that's exactly what's happening. AI is changing us through our microscopic behaviours, so subtly that it is nearly indistinguishable from our everyday actions. We embrace it mainly to save time, but in order for it to give us time, it has to get that time from somewhere. What do I mean? AI is trained on work and data that required the time others invested in producing it.
Finally, there is the issue of decolonisation and algorithmic bias. We can only address these matters by developing our own AI models, publishing more on the African continent, or partnering in publishing with more established authors from the global North. The bottom line is that whether we decide to use AI or not, we have to use it wisely and appropriately, without doing harm to others in the process. It has been said numerous times that over-reliance on AI will be our downfall. We cannot embrace AI for the sake of embracing new technology, as AI is not a tool in the traditional sense of how tools work. Be safe, use your judgement, don't harm others. There will be collateral damage... I think it's a given. The value proposition is too big to ignore, just as there will always be casualties from driving cars... and we don't give up on driving.