week 2 web design
This week marks the first week of our workshop, and we are all still a little unfamiliar with the digital-practice content. We started with web page creation. Although I took relevant courses and built some simple web pages as an undergraduate, I haven't practised much since then, and the web design software I used back then was different, so the initial setup of the page was a little bumpy. However, the subsequent code writing, and using the browser's web inspection tools to view and modify the page, went relatively smoothly.

This process also prompted some thoughts. On the one hand, web inspection tools let page creators conveniently view the source code and make changes when problems occur, and they let learners like us study the source code of high-quality pages. On the other hand, the same tools can be used to alter how a page is presented, including its text, images, and all other content. Although such a change is only superficial, since the real page is untouched and a refresh restores the original, people with ulterior motives can modify a page this way and take screenshots to deceive people unfamiliar with this field, which may lead to adverse consequences. We therefore need to use this technology carefully and responsibly.
week 3 web scraping
This week, based on our existing understanding of web page code, we further studied web scraping technology. Web scraping refers
to the use of automated programs to extract data from websites. It is a common method in data science, market research, public
opinion analysis, AI training, and news investigation. With scraping tools, we can locate the page elements we need more efficiently and tally the information on a page more systematically; a minimal sketch of this workflow appears at the end of this entry. From a critical perspective, web scraping seemingly realizes information
freedom, but in fact, it intensifies the inequality of data power - large institutions dominate data collection with resources
and algorithms, while individuals and small websites passively become "data sources". The selection and filtering during the
scraping process also hide biases, determining which content is considered valuable and which is excluded. Moreover, scraping
often infringes on privacy, ignores contextual consent, and incorporates users into surveillance systems without their knowledge.
It reflects not only digital capitalism's plundering of information but also contemporary society's reliance on "scrapability": only what can be read by machines is regarded as existing. Therefore, web scraping is not just a technical issue,
but a convergence point of knowledge, power, and ethics - it requires us to rethink the balance between information openness and
individual dignity in the data age.
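To make the workflow described above concrete, here is a minimal scraping sketch in Python using requests and BeautifulSoup. It is illustrative only: the URL and the CSS selector are placeholders for a hypothetical page of article headlines, not any site we actually scraped.

```python
# A minimal scraping sketch; the URL and selector are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup
from collections import Counter

url = "https://example.com/articles"  # placeholder, not a real target site
response = requests.get(url, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
# Extract the text of every element matching a (hypothetical) selector.
headlines = [h.get_text(strip=True) for h in soup.select("h2.article-title")]

# "Counting the information on the page": tally how often each word appears.
word_counts = Counter(word.lower() for h in headlines for word in h.split())
print(word_counts.most_common(10))
```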
week 4 Data Collection
This week, we delved into data-related knowledge and learned that for the same topic, different perspectives require different
data, and the data collection thinking process varies accordingly. Data is never neutral. Researchers "produce" a specific data
reality through problem setting, classification methods, participant selection, and other steps. Data collection is closely
related to power - who defines the research question, what counts as "use", and whose voices are included or excluded all shape the data itself. Our group was assigned scenario 3, a research-led brief in which we acted as academic researchers at a UK-based university interested in learning about how students use generative AI as part of their daily lives. Our research
goal was to understand usage habits. At the beginning of the research, we spent a considerable amount of time analyzing and
discussing our positioning and topic focus. Since generative AI is a broad concept and students' daily lives are also a vast
scope, we eventually narrowed it down to students' homework, study, and writing. After that, we unpacked the question of "how students use" generative AI, considering "use" in terms of frequency, purpose, whether students disclose their use of AI, behaviour patterns, the most commonly used software, and attitudes towards use and its impact. We ultimately selected the
five most important questions to design the questionnaire. Through this process, I also discovered that it is very difficult
to design questions that are specific enough to generate meaningful data while remaining neutral and non-judgmental, especially
when it comes to issues of academic integrity. As a researcher, you must balance research interests and participant protection,
and not make students feel like they are being "checked for cheating". From the perspective of data collection, students are
also reluctant to reveal their true thoughts and usage habits when it involves academic integrity.
week 5 data visualization
This week, we carried out a visual analysis of the questionnaire data collected last week, mainly through pie charts and line charts in Excel. Because only a small number of questionnaires had been collected, we found it difficult to draw general conclusions, and the amount of useful information was limited. For different audiences, the content we choose to visualise will also differ, and the conclusions may differ as well. Visualisation is therefore both analytical and interpretive: charts do not simply “show” data, they mediate it, turning abstract patterns into persuasive arguments.
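As a small illustration of how the same numbers can be framed differently for different audiences, here is a sketch in Python with matplotlib rather than Excel. The question wording and response counts are made up for the example, not our actual survey results.

```python
# A visualisation sketch with invented response counts, for illustration only.
import matplotlib.pyplot as plt

# Hypothetical answers to "How often do you use generative AI for coursework?"
labels = ["Daily", "Weekly", "Rarely", "Never"]
counts = [4, 7, 5, 2]  # a small sample, like our questionnaire

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.pie(counts, labels=labels, autopct="%1.0f%%")
ax1.set_title("Reported frequency of use")

# The same data as a bar chart foregrounds the tiny sample size instead.
ax2.bar(labels, counts)
ax2.set_ylabel("Number of respondents")
ax2.set_title("Same data, different framing")

plt.tight_layout()
plt.show()
```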
week 6 Identity, Algorithmic Identity, and Data
This week, we learned what an algorithm is and how it affects the way our identities are recognised on social media. Our algorithmic identity changes dynamically over time. First, we found the privacy-permission settings on the Instagram platform to understand what information the algorithm collects and analyses about us; from this it continually infers our identity and preferences and recommends new content we might be interested in, especially advertising and marketing. Our algorithmic identity is thus a classification we never chose for ourselves, one that shapes us into simplified, commercialised advertising targets. Although the algorithm can derive our identity and preferences by analysing quantitative data such as our likes and saves, it sometimes misjudges us, because our true emotions cannot be calculated and our preferences keep changing.
After that, we classified the content published by our social media friends according to Sumpter's classification method. In the process, I was unsure which category selfie content should fall into. To unify the classification standards, I set some rules, but this also made the classification results as ruthless as a machine's, as if simulating the algorithm's positioning of user identity. It made me realise that an algorithmic identity is already being built from the moment classification begins, and that this identity is not real.
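A toy version of that rule-based process, written in Python, shows how quickly such rules flatten ambiguity. The categories and keywords below are invented for illustration and are not Sumpter's actual scheme.

```python
# Invented categories and keywords; not Sumpter's real classification.
RULES = {
    "selfie": ["selfie", "my face", "me today"],
    "food": ["lunch", "dinner", "cafe", "recipe"],
    "travel": ["trip", "beach", "flight", "city"],
}

def classify(caption: str) -> str:
    """Assign a post to the first category whose keyword appears."""
    text = caption.lower()
    for category, keywords in RULES.items():
        if any(kw in text for kw in keywords):
            return category
    return "other"  # everything ambiguous is flattened into one bucket

posts = ["Beach trip with friends!", "New selfie :)", "Quiet Sunday"]
print([classify(p) for p in posts])  # ['travel', 'selfie', 'other']
```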
week 7
In this week’s workshop and class discussion, I realised that negative prompting is not simply a technique for avoiding unwanted
narrative elements, but a way of interacting with the predictive structure of generative AI. When I first attempted to guide the
model, my instructions were overly specific and strongly negative, which paradoxically made the model produce exactly what I
attempted to exclude. Only when I loosened my phrasing did the system begin to respond differently. This practical experience
directly echoed the idea raised in class that generative AI does not process negation as “absence,” but as another form of
input within a probabilistic system.
This unexpectedly led me to think about Munster’s idea of computational experience. Instead of imagining myself as a “user”
giving precise orders, I became aware that meaning was produced in the relational space between my language and the model’s
probabilistic tendencies. The “result” was not simply generated by the model nor controlled by me, but emerged from the
shifting relation we negotiated through prompting.
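As a schematic illustration of the point about negation, the Python sketch below calls no model at all; the prompts are invented examples. It only makes visible that both phrasings feed tokens into the context, so the "excluded" concept remains present as input.

```python
# No model is called here: the point is only what each prompt puts into context.
over_specified = "Write a story. Do NOT mention dragons, dragon fire, or dragon eggs."
loosened = "Write a quiet, domestic story set in a small coastal village."

for name, prompt in [("over-specified", over_specified), ("loosened", loosened)]:
    # Every token, including the 'forbidden' ones, becomes conditioning input;
    # negation adds the concept to the distribution instead of removing it.
    print(f"{name}: {prompt.split()}")
```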
week 8
During the Kirkgate Market workshop, we were asked to explore human-food ecologies through sensory experience and simple digital recordings. When I followed tastes and smells through the market instead of following a set route, the mixed flavours of seafood, fried chicken, braised rice, and Indian restaurants formed a multi-layered space. I no longer regarded food as a static cultural object, but realised that "perception" itself is a relational and more-than-human experience, which also responds to the workshop's discussion of ecological entanglement, geography, and multi-species relationships.
What impressed me most was how digital recording reshaped the senses. When I tried to capture "taste" by taking videos and photos, I realised that the digital medium not only failed to really preserve the smells but also changed the way I experienced them. The fishy smell of seafood and the smoke of fried chicken cannot appear in the picture at all, yet the act of recording itself became part of the sensory experience. In this sense, digital media does not merely record the senses; it participates in lived experience.
Walking by smell also reminded me of the more-than-human food ecology: what is food before it becomes food? Which sea does it come from? Who breeds, fishes, and transports it? Each flavour points to a history far beyond the market space. When I smelled the seafood, I realised that the "invisible" ocean was actually brought into the scene through smell, which resonates strongly with the workshop's question, "Who is here and who is absent?"
Looking back, I realised that I was not "recording the market" but forming a new mode of experience through digital media, body, and food. I was no longer a bystander but part of a digital-food-body ecology: seafood, equipment, smell, and the act of shooting form a process of mutual entanglement and intra-action.
week 9
In this week's Creative Hacking class, we used an Arduino to sense our body temperature. While I was connecting circuits and uploading code, I realised that in the process my body was being translated into something a technical system could recognise only through data and sensors. The workshop document reminds us that the point is not to "learn technology" but to use digital sensing as a way to understand and experience the body. In this process, my body is no longer just a subjective feeling; it is presented in the form of numbers, lights, and circuits.
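On the computer side, reading those numbers can be as simple as the following Python sketch with pyserial. It assumes, purely for illustration, that the uploaded Arduino sketch prints one temperature value per line at 9600 baud; the port name is a placeholder that depends on the machine.

```python
# Host-side sketch assuming the Arduino prints one reading per line at 9600 baud.
import serial  # pyserial: pip install pyserial

PORT = "/dev/ttyACM0"  # placeholder; e.g. "COM3" on Windows
with serial.Serial(PORT, 9600, timeout=2) as ser:
    for _ in range(10):
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        # The body arrives here only as a number: reading replaces feeling.
        print(f"body temperature reading: {line} °C")
```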
This experience also reminds me of Forlano's discussion of "hacking the body": the body is not a fixed biological unit,
but is constantly redefined with the intervention of technology. When I touched the sensor with my hand and observed
the changes in the LED light, I realised that my body was not only "in me", but was scattered in wires, circuits and
software. Physical feeling became a kind of "reading", not just sensation. This translation of the body's interior into an external, digital body made me understand the body as a hybrid interface - both biological and technical (Forlano 2016).
week 10
In this week's class on interactive narrative, our group created a story called "Magic Market: Sales Opportunity in the Middle of the Night", and the creative process made me rethink what digital-media storytelling means. Jordan's view that "postdigital storytelling is a research method" also became clearer in practice, because our narrative is no longer fixed information but an experimental space in which players think about desire, risk, and value through the choice of whether to buy opportunities. In this sense, the core of interactive narrative is not to tell more content, but to let the audience jointly generate the emotional and ethical outcome through their own actions.
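To illustrate the structure behind this, here is a toy branching sketch in Python. It is not our actual story implementation; the scene text and choices are simplified placeholders for the group's piece.

```python
# A toy branching-narrative sketch; scenes and choices are invented placeholders.
story = {
    "stall": {
        "text": "A midnight stall offers to sell you an 'opportunity'. What do you do?",
        "choices": {"buy": "paid", "refuse": "refused"},
    },
    "paid": {"text": "You gain the chance, but at a hidden price.", "choices": {}},
    "refused": {"text": "You keep what you have, and wonder what you missed.", "choices": {}},
}

node = "stall"
while True:
    scene = story[node]
    print(scene["text"])
    if not scene["choices"]:
        break  # an ending: the outcome was co-produced by the player's choices
    choice = input(f"Choose one of {list(scene['choices'])}: ").strip().lower()
    node = scene["choices"].get(choice, node)  # unknown input repeats the scene
```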