References

Aïmeur, E., Amri, S., & Brassard, G. (2023). Fake news, disinformation and misinformation in social media: A review. Social Network Analysis and Mining, 13(1), 30.
Akata, Z., Balliet, D., De Rijke, M., Dignum, F., Dignum, V., Eiben, G., Fokkens, A., Grossi, D., Hindriks, K., Hoos, H., et al. (2020). A research agenda for hybrid intelligence: Augmenting human intellect with collaborative, adaptive, responsible, and explainable artificial intelligence. Computer, 53(8), 18–28.
Amari, S. (1967). A theory of adaptive pattern classifiers. IEEE Transactions on Electronic Computers, EC-16(3), 299–307.
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., et al. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115.
Azevedo, A., & Santos, M. F. (2008). KDD, SEMMA and CRISP-DM: A parallel overview. IADS-DM.
Barbudo, R., Ventura, S., & Romero, J. R. (2023). Eight years of AutoML: Categorisation, review and trends. Knowledge and Information Systems, 65(12), 5097–5149.
Bird, C., Ungless, E., & Kasirzadeh, A. (2023). Typology of risks of generative text-to-image models. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 396–410.
Borys, K., Schmitt, Y. A., Nauta, M., Seifert, C., Krämer, N., Friedrich, C. M., & Nensa, F. (2023). Explainable AI in medical imaging: An overview for clinical practitioners–saliency-based XAI approaches. European Journal of Radiology, 162, 110787.
Bostrom, N. (2015). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Bostrom, N. (2020). Ethical issues in advanced artificial intelligence. Machine Ethics and Robot Ethics, 69–75.
Brandtzaeg, P. B., Skjuve, M., & Følstad, A. (2022). My AI friend: How users of a social chatbot understand their human–AI friendship. Human Communication Research, 48(3), 404–429.
Brauner, P., Hick, A., Philipsen, R., & Ziefle, M. (2023). What does the public think about artificial intelligence?—a criticality map to understand bias in the public perception of AI. Frontiers in Computer Science, 5, 1113903.
Buchanan, B. G. (2005). A (very) brief history of artificial intelligence. AI Magazine, 26(4), 53–60.
Buchanan, B. G., & Smith, R. G. (1988). Fundamentals of expert systems. Annual Review of Computer Science, 3(1), 23–58.
Campbell, M., Hoane Jr, A. J., & Hsu, F. (2002). Deep Blue. Artificial Intelligence, 134(1-2), 57–83.
Carretero, S., Vuorikari, R., & Punie, Y. (2017). DigComp 2.1. The Digital Competence Framework for Citizens. With Eight Proficiency Levels and Examples of Use. Publications Office of the European Union.
Cave, S., & Dihal, K. (2019). Hopes and fears for intelligent machines in fiction and reality. Nature Machine Intelligence, 1(2), 74–78.
Chalmers, D. J. (2016). The singularity: A philosophical analysis. Science Fiction and Philosophy: From Time Travel to Superintelligence, 171–224.
Choudhary, T., Mishra, V., Goswami, A., & Sarangapani, J. (2020). A comprehensive survey on model compression and acceleration. Artificial Intelligence Review, 53, 5113–5155.
Chrisley, R. (2003). Embodied artificial intelligence. Artificial Intelligence, 149(1), 131–150.
Chu, X., Ilyas, I. F., Krishnan, S., & Wang, J. (2016). Data cleaning: Overview and emerging challenges. Proceedings of the 2016 International Conference on Management of Data, 2201–2206.
Cole, D. (2004). The Chinese Room Argument. In The Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/chinese-room/; Metaphysics Research Lab, Stanford University.
Copeland, B. (2024a). Alan Turing. https://www.britannica.com/biography/Alan-Turing
Copeland, B. (2024b). History of artificial intelligence (AI). https://www.britannica.com/science/history-of-artificial-intelligence
Darrach, B. (1970). Meet Shaky, the first electronic person. Life, November, 58–68.
Deng, S., Zhao, H., Fang, W., Yin, J., Dustdar, S., & Zomaya, A. Y. (2020). Edge intelligence: The confluence of edge computing and artificial intelligence. IEEE Internet of Things Journal, 7(8), 7457–7469.
Dennett, D. (2017). From bacteria to Bach and back: The evolution of minds. W. W. Norton & Company.
Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way (Vol. 2156). Springer.
Díaz-Rodríguez, N., Del Ser, J., Coeckelbergh, M., Prado, M. L. de, Herrera-Viedma, E., & Herrera, F. (2023). Connecting the dots in trustworthy artificial intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Information Fusion, 99, 101896.
Dorrestijn, S. (2012). The product impact tool. Design for Usability Methods & Tools, 111–119.
Dorrestijn, S. (2024). Ethiek & technologie, maar dan praktisch. Met bijdragen van de leden van het Saxion lectoraat Ethiek & Technologie. Ethiek & Technologie, Saxion University of Applied Sciences, Deventer. https://doi.org/10.5281/zenodo.12683806
Dwivedi, R., Dave, D., Naik, H., Singhal, S., Omer, R., Patel, P., Qian, B., Wen, Z., Shah, T., Morgan, G., et al. (2023). Explainable AI (XAI): Core ideas, techniques, and solutions. ACM Computing Surveys, 55(9), 1–33.
Elkins, K., & Chun, J. (2020). Can GPT-3 pass a writer’s Turing test? Journal of Cultural Analytics, 5(2).
Fast, E., & Horvitz, E. (2017). Long-term trends in the public perception of artificial intelligence. Proceedings of the AAAI Conference on Artificial Intelligence, 31.
Frické, M. (2019). The knowledge pyramid: The DIKW hierarchy. Knowledge Organization, 46(1), 33–46.
Fuegi, J., & Francis, J. (2003). Lovelace & Babbage and the creation of the 1843 “notes.” IEEE Annals of the History of Computing, 25(4), 16–26.
Gartner, Inc. (2024, August 21). Gartner 2024 hype cycle for emerging technologies highlights developer productivity, total experience, AI and security. https://www.gartner.com/en/newsroom/press-releases/2024-08-21-gartner-2024-hype-cycle-for-emerging-technologies-highlights-developer-productivity-total-experience-ai-and-security
Good, I. J. (1966). Speculations concerning the first ultraintelligent machine. In Advances in computers (Vol. 6, pp. 31–88). Elsevier.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. Advances in Neural Information Processing Systems, 27.
Gu, J. (2024). Responsible generative AI: What to generate and what not. arXiv Preprint arXiv:2404.05783.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 51(5), 1–42.
He, X., Zhao, K., & Chu, X. (2021). AutoML: A survey of the state-of-the-art. Knowledge-Based Systems, 212, 106622.
Heckerman, D. (1998). A tutorial on learning with bayesian networks. Learning in Graphical Models, 301–354.
Hofstadter, D. R. (1999). Gödel, Escher, Bach: An eternal golden braid. Basic Books.
Hospedales, T., Antoniou, A., Micaelli, P., & Storkey, A. (2021). Meta-learning in neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(9), 5149–5169.
Huxley, J. (2015). Transhumanism. Ethics in Progress, 6(1), 12–16.
IEEE Spectrum. (2008). Tech luminaries address singularity. https://spectrum.ieee.org/tech-luminaries-address-singularity
Ji, S., Pan, S., Cambria, E., Marttinen, P., & Philip, S. Y. (2021). A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Transactions on Neural Networks and Learning Systems, 33(2), 494–514.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
Jones, C. R., & Bergen, B. K. (2024). People cannot distinguish GPT-4 from a human in a Turing test. arXiv Preprint arXiv:2405.08007.
Kaelbling, L. P., Littman, M. L., & Moore, A. W. (1996). Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4, 237–285.
Kerschke, P., Hoos, H. H., Neumann, F., & Trautmann, H. (2019). Automated algorithm selection: Survey and perspectives. Evolutionary Computation, 27(1), 3–45.
Kraus, M., Fuchs, J., Sommer, B., Klein, K., Engelke, U., Keim, D., & Schreiber, F. (2022). Immersive analytics with abstract 3D visualizations: A survey. Computer Graphics Forum, 41, 201–229.
Kreuzberger, D., Kühl, N., & Hirschl, S. (2023). Machine learning operations (MLOps): Overview, definition, and architecture. IEEE Access, 11, 31866–31879.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25.
Kumar, Y., Koul, A., Singla, R., & Ijaz, M. F. (2023). Artificial intelligence in disease diagnosis: A systematic literature review, synthesizing framework and future research agenda. Journal of Ambient Intelligence and Humanized Computing, 14(7), 8459–8486.
Kurzweil, R. (2005). The singularity is near. In Ethics and emerging technologies (pp. 393–406). Springer.
Lasi, H., Fettke, P., Kemper, H.-G., Feld, T., & Hoffmann, M. (2014). Industry 4.0. Business & Information Systems Engineering, 6, 239–242.
LeCun, Y. (1998). The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.
LeCun, Y., Jackel, L., Bottou, L., Brunot, A., Cortes, C., Denker, J., Drucker, H., Guyon, I., Muller, U., Sackinger, E., et al. (1995). Comparison of learning algorithms for handwritten digit recognition. International Conference on Artificial Neural Networks, 60, 53–60.
Lee, E. A. (2020). The coevolution: The entwined futures of humans and machines. MIT Press.
Li, C., Gan, Z., Yang, Z., Yang, J., Li, L., Wang, L., Gao, J., et al. (2024). Multimodal foundation models: From specialists to general-purpose assistants. Foundations and Trends in Computer Graphics and Vision, 16(1-2), 1–214.
Liu, B. (2021). "Weak AI" is likely to never become "strong AI", so what is its greatest value for us? arXiv Preprint arXiv:2103.15294.
Liu, Y., Zhang, K., Li, Y., Yan, Z., Gao, C., Chen, R., Yuan, Z., Huang, Y., Sun, H., Gao, J., et al. (2024). Sora: A review on background, technology, limitations, and opportunities of large vision models. arXiv Preprint arXiv:2402.17177.
Luccioni, S., Jernite, Y., & Strubell, E. (2024). Power hungry processing: Watts driving the cost of AI deployment? The 2024 ACM Conference on Fairness, Accountability, and Transparency, 85–99.
Macal, C. M. (2016). Everything you need to know about agent-based modelling and simulation. Journal of Simulation, 10(2), 144–156.
Mara, M., Stein, J.-P., Latoschik, M. E., Lugrin, B., Schreiner, C., Hostettler, R., & Appel, M. (2021). User responses to a humanoid robot observed in real life, virtual reality, 3D and 2D. Frontiers in Psychology, 12, 633178.
Martínez-Plumed, F., Contreras-Ochando, L., Ferri, C., Hernández-Orallo, J., Kull, M., Lachiche, N., Ramírez-Quintana, M. J., & Flach, P. (2019). CRISP-DM twenty years later: From data mining processes to data science trajectories. IEEE Transactions on Knowledge and Data Engineering, 33(8), 3048–3061.
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4), 12–14.
McDermott, D. (2007). Artificial intelligence and consciousness. The Cambridge Handbook of Consciousness, 117–150.
Mitchell, M. (1998). An introduction to genetic algorithms. MIT Press.
Modha, D. S., Ananthanarayanan, R., Esser, S. K., Ndirango, A., Sherbondy, A. J., & Singh, R. (2011). Cognitive computing. Communications of the ACM, 54(8), 62–71.
Molnar, C. (2022). Interpretable machine learning. Independently published.
Moor, J. (2006). The Dartmouth College artificial intelligence conference: The next fifty years. AI Magazine, 27(4), 87–91.
Newell, A., Shaw, J. C., & Simon, H. A. (1959). Report on a general problem solving program. IFIP Congress, 256, 64.
Nishant, R., Kennedy, M., & Corbett, J. (2020). Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. International Journal of Information Management, 53, 102104.
Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M.-E., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., et al. (2020). Bias in data-driven artificial intelligence systems—an introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(3), e1356.
OpenAI. (2023). GPT-4 technical report. arXiv Preprint arXiv:2303.08774.
Pan, S., Luo, L., Wang, Y., Chen, C., Wang, J., & Wu, X. (2024). Unifying large language models and knowledge graphs: A roadmap. IEEE Transactions on Knowledge and Data Engineering.
Penedo, G., Malartic, Q., Hesslow, D., Cojocaru, R., Cappelli, A., Alobeidli, H., Pannier, B., Almazrouei, E., & Launay, J. (2023). The RefinedWeb dataset for falcon LLM: Outperforming curated corpora with web data, and web data only. arXiv Preprint arXiv:2306.01116.
Pinar Saygin, A., Cicekli, I., & Akman, V. (2000). Turing test: 50 years later. Minds and Machines, 10(4), 463–518.
Qin, X., Luo, Y., Tang, N., & Li, G. (2020). Making data visualization more efficient and effective: A survey. The VLDB Journal, 29(1), 93–117.
Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., & Lee, H. (2016). Generative adversarial text to image synthesis. International Conference on Machine Learning, 1060–1069.
Rojas, R. (1997). Konrad Zuse’s legacy: The architecture of the Z1 and Z3. IEEE Annals of the History of Computing, 19(2), 5–16.
Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386.
Russell, S. J., & Norvig, P. (2016). Artificial intelligence: A modern approach (3rd ed.). Pearson.
Schmidhuber, J. (2007). Gödel machines: Fully self-referential optimal universal self-improvers. In Artificial general intelligence (pp. 199–226). Springer.
Schreiner, M. (2023, July 11). GPT-4 architecture, datasets, costs and more leaked. https://the-decoder.com/gpt-4-architecture-datasets-costs-and-more-leaked/
Schröer, C., Kruse, F., & Gómez, J. M. (2021). A systematic literature review on applying CRISP-DM process model. Procedia Computer Science, 181, 526–534.
Seaborn, K., Barbareschi, G., & Chandra, S. (2023). Not only WEIRD but “uncanny”? A systematic review of diversity in human–robot interaction research. International Journal of Social Robotics, 15(11), 1841–1870.
Sejnowski, T. J. (2023). Large language models and the reverse turing test. Neural Computation, 35(3), 309–342.
Shafique, U., & Qaiser, H. (2014). A comparative study of data mining process models (KDD, CRISP-DM and SEMMA). International Journal of Innovation and Scientific Research, 12(1), 217–222.
Shanahan, M. (2016). The Frame Problem. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2016). https://plato.stanford.edu/archives/spr2016/entries/frame-problem/; Metaphysics Research Lab, Stanford University.
Shi, Z., Yao, W., Li, Z., Zeng, L., Zhao, Y., Zhang, R., Tang, Y., & Wen, J. (2020). Artificial intelligence techniques for stability analysis and control in smart grids: Methodologies, applications, challenges and future directions. Applied Energy, 278, 115733.
Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504.
Siemens, G., Marmolejo-Ramos, F., Gabriel, F., Medeiros, K., Marrone, R., Joksimovic, S., & Laat, M. de. (2022). Human and artificial cognition. Computers and Education: Artificial Intelligence, 3, 100107.
Tan, X., Chen, J., Liu, H., Cong, J., Zhang, C., Liu, Y., Wang, X., Leng, Y., Yi, Y., He, L., et al. (2024). NaturalSpeech: End-to-end text-to-speech synthesis with human-level quality. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Turing, A. M. (1950). I.—Computing Machinery and Intelligence. Mind, LIX(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433
Van Engelen, J. E., & Hoos, H. H. (2020). A survey on semi-supervised learning. Machine Learning, 109(2), 373–440.
Van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics, 1(3), 213–218.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
Vinge, V. (1993). Technological singularity. VISION-21 Symposium Sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, 30–31.
Von Ahn, L., Maurer, B., McMillen, C., Abraham, D., & Blum, M. (2008). reCAPTCHA: Human-based character recognition via web security measures. Science, 321(5895), 1465–1468.
Vorst, R. van der, & Kamp, J.-A. (2021). The importance of a free, open, online technology impact cycle tool. EUNIS’21.
Wach, K., Duong, C. D., Ejdys, J., Kazlauskaitė, R., Korzynski, P., Mazurek, G., Paliszkiewicz, J., & Ziemba, E. (2023). The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT. Entrepreneurial Business and Economics Review, 11(2), 7–30.
Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45.
Wikipedia contributors. (2024a). History of artificial intelligence — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=History_of_artificial_intelligence&oldid=1238546461.
Wikipedia contributors. (2024b). Moore’s law — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Moore%27s_law&oldid=1250511271.
Wikipedia contributors. (2024c). Roy amara — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Roy_Amara&oldid=1237425964.
Wikipedia contributors. (2024d). Timeline of artificial intelligence — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Timeline_of_artificial_intelligence&oldid=1240226624.
Wikipedia contributors. (2024e). Wheat and chessboard problem — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Wheat_and_chessboard_problem&oldid=1238847142.
Wolfe, G. (1998). The book of the new sun. SFBC.
Wu, C.-J., Raghavendra, R., Gupta, U., Acun, B., Ardalani, N., Maeng, K., Chang, G., Aga, F., Huang, J., Bai, C., et al. (2022). Sustainable AI: Environmental implications, challenges and opportunities. Proceedings of Machine Learning and Systems, 4, 795–813.
Wu, X., Zhao, H., Zhu, Y., Shi, Y., Yang, F., Liu, T., Zhai, X., Yao, W., Li, J., Du, M., et al. (2024). Usable XAI: 10 strategies towards exploiting explainability in the LLM era. arXiv Preprint arXiv:2403.08946.
Yang, Li, & Shami, A. (2020). On hyperparameter optimization of machine learning algorithms: Theory and practice. Neurocomputing, 415, 295–316.
Yang, Ling, Zhang, Z., Song, Y., Hong, S., Xu, R., Zhao, Y., Zhang, W., Cui, B., & Yang, M.-H. (2023). Diffusion models: A comprehensive survey of methods and applications. ACM Computing Surveys, 56(4), 1–39.
Zha, D., Bhat, Z. P., Lai, K.-H., Yang, F., Jiang, Z., Zhong, S., & Hu, X. (2023). Data-centric artificial intelligence: A survey. arXiv Preprint arXiv:2303.10158.
Zhang, C., Zhang, C., Zhang, M., & Kweon, I. S. (2023). Text-to-image diffusion models in generative AI: A survey. arXiv Preprint arXiv:2303.07909.
Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., et al. (2023). A survey of large language models. arXiv Preprint arXiv:2303.18223.
Zhou, C., Li, Q., Li, C., Yu, J., Liu, Y., Wang, G., Zhang, K., Ji, C., Yan, Q., He, L., et al. (2023). A comprehensive survey on pretrained foundation models: A history from BERT to ChatGPT. arXiv Preprint arXiv:2302.09419.