“Big data.” It’s in the air. For instance, IBM claims that “[e]very day, we create 2.5 quintillion bytes of data—so much that 90% of the data in the world today has been created in the last two years alone.” In 2008, Nature ran an interesting special on big data. And lately in the humanities, there has been a slew of discussions about the variety, velocity, and volume of information. Consider the recent “Big Data” event at McGill University (organized by Stéfan Sinclair and Matthew Milner), the Stanford Literary Lab’s pamphlets series, and the cultural analytics work being done in the Software Studies Lab at…