
11:45
Hello everyone :-)

12:16
Hello :)

12:23
Hi, everyone! Sorry my camera is broken.

12:59
Hi!

13:01
Hi everyone!

13:02
Hello! This is Abdulhakim Saidu from Malaysia.

13:08
Hi everyone.

13:51
hello

13:53
Hello, everyone. Good morning from Brazil :)

14:34
Hello! This is Lizzy from China. Good evening

16:20
Good afternoon from Ireland!

17:35
yes

17:38
Yes.

17:47
try to stop the sharing and do it again

17:55
yep

19:42
Could you tell us what websites?

19:50
We may need to all shut off videos so Kay can present.

20:08
Good afternoon from France

20:23
mmodalityleeds.wordpress.com

20:33
https://multimodalforum.org/

20:33
https://mmodalityleeds.wordpress.com/2020/12/11/programmes-multimodality-talks-series/

21:28
I believe the online format makes these wonderful meetings accessible to all of us!

21:45
Thanks

21:55
Hello everyone!

23:15
I'm sorry, but I have a question: can I have access to the recording of this session later?

23:46
we’ve just posted the links to the websites where we normally publish the recordings (just wait for a couple of days after the talk)

24:18
Thanks a million 🙏

40:26
yes

56:20
👍

01:01:42
Excellent work and talk. Does the MAP only have the terrorism data you used, or can users upload/add their own data for processing?

01:04:43
Excellent event

01:06:25
Could these approaches to analysis help us better motivate analytical concepts and frameworks within SF MDA - e.g. show whether the metafunctions are both intrinsically and extrinsically motivated for other modes as Halliday has argued they are for language (based on probabilities and the extent to which choices for construing experiential meanings in grammar constrain those for, say, interpersonal meaning)?

01:06:49
Could we get this presentation?

01:07:11
it’s being recorded (if you scroll up in the chat you’ll see the link where the recording will be published)

01:08:29
Kindly resend the link where the recording will be published

01:08:51
https://mmodalityleeds.wordpress.com/2020/12/11/programmes-multimodality-talks-series/

01:09:09
thanks

01:10:10
Sentiment

01:12:37
Sentiment analysis really struggles with stance and context, style/register, etc. Twitter is famous for irony, and for complex satire and meme play… what could sentiment analysis have to say about Twitter data that would be useful? I can imagine Twitter users in Liverpool saying “yeah that was brilliant”

01:14:55
https://multimodalforum.org/2021/06/04/artificial-intelligence-and-multimodality-from-semiotics-to-intelligent-systems/
Online Workshop organised by the University of Cambridge and University College London on 'Artificial Intelligence and Multimodality: From Semiotics to Intelligent Systems'
When: Mon, 14 June 2021, 12:30 – 19:00 BST
Where: online event
Registration: https://www.eventbrite.co.uk/e/artificial-intelligence-and-multimodality-tickets-154438418467

01:16:54
the presentation was really fantastic.

01:17:49
Thanks for the talk, Kay! Kids are calling here. Happy weekend to all!

01:18:01
I'm amazed by this presentation!

01:18:06
Great, thanks - does anyone have a link to the MAP?

01:18:07
I wonder how we bring in the CDA aspect to understand the data?

01:18:11
I am an architect, and I would be interested in using Multimodality to analyse my data for understanding the user’s embodied experience of space. Any suggestions/guidance on how/where to start? Because I haven’t come across the use of Multimodality in architectural spaces/environments. Thank you.

01:20:18
Is it possible to work with a cognitive-functional integrated framework?

01:20:23
Can the image processor in MAP process illustrations or paintings?

01:27:56
how long do we have for the questions?

01:28:40
Lovely to hear your talk Kay - thanks so much carey x

01:28:46
You can either type your question or raise your hand for a brief comment or question. We have another 10 minutes to discuss these

01:31:58
Thank you from Australia; looking forward to seeing the recording.

01:32:46
Thanks Prof Kay for the great talk, as always, very informative and inspiring. Here is my question: when you are doing the early and late fusion of different modalities, do you apply any ideas of intersemiosis to supervise the fusion? Or do you just let the patterns emerge from the fusion without any supervision?

01:32:53
Thanks for fantastic presentation, Kay! If there’s time for one more question: If the idea is to ‘teach’ computers to analyse big data multimodally, have you already noticed any surprising “learning” happening? I’m curious about the extent to which the SF MDA framework that AI learns may be a departure from what we’ve ‘taught’; for example, one of the things we in SF MDA/ SFL value is the fuzziness of our categories…

01:32:56
Thank you this has been really interesting

01:34:02
let Emilia's question go first

01:34:44
Many thanks Kay for your fascinating and thought-provoking presentation!! :-)

01:35:26
What about the ISIS paper/s - for Kay and collaborators’ early work in that direction?

01:36:23
I mean critical SF MDA analysis of ISIS propaganda materials

01:36:53
Hi, thank you for the great talk. I have a question about the coding and annotation. In one of your journal articles (2016), Wikipedia’s category structure was used for providing contextual information. In the projects that you introduced in today’s talk, did you also use a similar structure to facilitate the coding and annotation of objects in images? Or did you leave that to the computer vision tools to do automatically?

01:37:15
Thanks to Kay for having given us this excellent talk. I believe it will take me time to get familiar with this platform and the theories. Can I ask if it would be possible to bring some essential knowledge on this into my class on language and communication? If so, could you please recommend some beginner books to use?

01:38:16
Maree Stenglin

01:38:37
This has been a very interesting presentation and discussion. I have to leave for a meeting. Thank you so much.

01:43:45
I see, thanks.

01:49:06
Thank you for an extremely interesting presentation!

01:49:53
Thank you for a wonderful talk!

01:50:47
Thank you for the wonderful seminar

01:51:13
Thank you very much for the interesting talk!

01:56:05
Thanks for the reply, Kay. I will begin by reading those papers.

01:57:36
Thank you for the very interesting talk

01:58:14
Thank you Kay and everyone for this session! We look forward to seeing you at our next meeting on 18th June 2021 for a talk by Dr Chiao-I Tseng, University of Bremen, Germany & Dr Emilia Djonov, Macquarie University, Sydney, Australia:
Discourse Semantics of Time and Transmedia Narratives for Children
To attend the talk, register here: https://www.eventbrite.se/e/chiao-i-tseng-emilia-djonov-discourse-semantics-of-time-and-tickets-132782256303

02:00:11
Thank you.

02:00:24
I enjoyed it a lot

02:00:27
Thank you so much, Dr Kay, for the insightful talk!

02:00:33
thanks a million

02:00:43
Thank you very much for the interesting talk and discussion!

02:00:59
Thank you for this fascinating talk!

02:00:59
Thank you very much for the talk, from Argentina

02:00:59
Thanks for the great talk

02:01:06
A great pleasure to attend this wonderful session. Thank you

02:01:08
Thanks for the great organization

02:01:09
Thanks very much for the fantastic talk!

02:01:20
Thank you so much for your fascinating talk and work!

02:01:24
cheers online

02:01:25
Thank you for this great talk!

02:01:26
Thanks again to Kay and the event organizers.

02:01:26
have a good time