Top ten tips for accessible social media

Social media platforms are a great way to communicate, but many posts aren’t accessible to users with visual or hearing impairments or loss. The good news is that it is relatively easy to ensure that our content is accessible to everyone. Here are my top ten tips.

1. Use alt text for images

Alternative text (alt text) is a description of an image that can be read by screen readers used by people who are blind or partially sighted. The description enables them to build a mental picture of the image. You don’t need to include every detail, just the key components that will help create the picture. You might find my post on audio description helpful.
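If you also manage a blog or website, you can even check pages for missing alt text programmatically. This is only a rough sketch using Python’s built-in HTML parser (the class name is my own invention):

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # Flag images with no alt attribute, or an empty one
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "(no src)"))

checker = AltTextChecker()
checker.feed('<p><img src="cat.jpg" alt="A tabby cat asleep on a windowsill">'
             '<img src="dog.jpg"></p>')
print(checker.missing)  # → ['dog.jpg']
```

Anything the script flags is an image a screen reader user will miss out on.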

2. Use simple fonts

Many platforms allow you to use a range of different fonts and mix symbols with letters. This can be a lot of fun, but it can make text difficult to read for those with visual impairments and can render it completely unreadable by screen readers. Stick to simple sans-serif fonts, not just for your text but also for your name/handle.

3. Capitalise hashtags

Camel with the phrase CamelCase aligned with its two humps.
Image credit: Silver Spoon CC BY-SA 2.0

Hashtags are used to identify key words and phrases with the hash symbol, e.g. #Accessibility. They aren’t case sensitive and are often written in lower case but this makes it difficult for screen readers to distinguish the individual words in a phrase. Instead, use ‘CamelCase’ by capitalising each word, e.g. #AccessibleSocialMedia. This way screen readers will be able to convert the text to audio accurately.
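If you post the same tags often, the capitalisation is easy to automate. A minimal Python sketch (the function name is my own):

```python
def camel_case_hashtag(phrase: str) -> str:
    """Convert a phrase into a screen-reader-friendly CamelCase hashtag."""
    return "#" + "".join(word.capitalize() for word in phrase.split())

print(camel_case_hashtag("accessible social media"))  # → #AccessibleSocialMedia
```

Note that `capitalize()` lower-cases the rest of each word, so acronyms such as PEGI would need handling separately.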

4. Use emojis sparingly

Emojis are great fun and often useful in expressing what we mean when we have to stick to a character limit. Be aware that screen readers will read them each in turn, so use them sparingly rather than repeating them for emphasis.

5. Be careful with colour

When working with colour in fonts, charts and graphics, contrast is as important as the colours themselves. Although red might seem an obvious choice for providing emphasis, it has low contrast against a white background, which can make it harder to read for those with a visual impairment. Red/green colour deficiency is also the most common type, so avoid these colours in combination. Similarly, a spectrum of colours might seem a good choice for a graph but may in fact make elements harder to distinguish for some. Instead, use a narrower colour palette or tonal theme.
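Contrast needn’t be guesswork: the Web Content Accessibility Guidelines (WCAG) define a contrast ratio you can calculate for any pair of colours, with 4.5:1 the recommended minimum for body text. As a rough illustration of that calculation in Python (the function names are my own):

```python
def relative_luminance(rgb):
    """WCAG relative luminance for an (r, g, b) tuple of 0-255 values."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colours (1.0 to 21.0)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Pure red on white falls just short of the 4.5:1 threshold...
print(round(contrast_ratio((255, 0, 0), (255, 255, 255)), 2))  # → 4.0
# ...while black on white passes comfortably.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 2))  # → 21.0
```

Red text on white coming out at roughly 4:1 is exactly why it is a poor choice for emphasis.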

6. Use bold for emphasis

We often use italicised text to highlight words or phrases but for many this is harder to read. Instead, use a bold typeface for emphasis.

7. Add captions to video content

Adding captions to your video content ensures that it is accessible to a wide range of users. Open captions are ‘burned’ into the video, and so will be visible on any platform, but users can’t switch them off. Closed captions can be switched on or off by users. Unfortunately, unlike alt text for images, most social media platforms don’t make it easy to add captions. Most video editing software has a facility to add captions, but this can be time-consuming if there is a lot of dialogue. If you upload a video to YouTube, captions will be generated automatically, and you can then edit them. This can be a quick way to ensure your video is accessible.

8. Make use of platform-specific advice

It is not always obvious from the user interface, but each of the main social media platforms offers advice and tools to make content accessible.

Twitter: Advice on making images more accessible from the Help Centre. Follow @TwitterA11y for updates.

Facebook: accessibility pages from the Help Centre.

Instagram: guidance on alt text from the Help Centre.

WhatsApp: FAQs about accessibility.

YouTube: YouTube Fundamentals: Accessibility.

TikTok: guidance on accessibility from the Help Centre.

9. Test before you post

Always preview or test your content before you post it online, checking that fonts, colours, alt text and so on work as intended. Computer and mobile phone software usually includes a text-to-speech feature which you can use to preview content.

10. Listen to your users

It’s really important to get feedback from users about how accessible your content is. Ideally, following the principle of ‘nothing about us without us’, users with particular needs should have a say in how content is produced. This will help us all produce better quality content. Here are some links to useful guidance:

RNIB accessibility guidance

Stroke Association accessible information guidance

Sense guidance on accessible social media

I hope you find these tips useful. If you think I have missed anything, please let me know and I will happily include it.

Image credit: Flickr / Stacey MacNaught, www.staceymacnaught.co.uk, CC BY 2.0

A safer internet: four steps to check reliability

I wrote this post for Safer Internet Day 2021, but the advice applies at any time. The theme was ‘An internet we trust: exploring reliability in an online world’. You can find out more, and download resources for different age groups, on the Safer Internet Centre website.

At a time when both teachers and children are working remotely over the internet, it is more crucial than ever that we can rely on the information, and sources of that information, that we encounter online.

In my role within the Education Team at the Bodleian Libraries, I deliver sessions on academic study skills, including evaluating online sources. You can find resources for this (and other course topics) on the OxLibris website. The guidance is aimed at students researching for the Extended Project Qualification (EPQ) and other coursework, but it applies to any online research.

1. What is the purpose of this site?

Ask yourself why the information has been put online. What is the intention of those who created the site, or posted the information? Is it to inform or educate? To entertain? Perhaps it’s to persuade or promote a particular opinion or point of view? Maybe the aim is marketing: to sell a product.

Information about the site can be useful in deciding this, but it is also worth looking at any adverts on the site. While many sites carry adverts that are unconnected with the information, ask yourself whether the advertisers could be influencing the site’s content. This could be direct, by paying for the content, or indirect, because the site avoids publishing anything it thinks advertisers will not like.

2. Who has provided this information?

There are several ways we can find out who is behind a website and the information it contains. A well-run website should make it easy for users to find this out.

  • An ‘About’, ‘About Us’ or ‘Who We Are’ tab in the menu or navigation bar is a good place to start. This should provide information about who is behind the site and their reasons for creating it.
  • Contact details can often be found on a ‘Contact Us’ page or link. Ideally, this should provide as much contact information as possible, not just a web form or email address. Look for a registered telephone number and a postal address.
  • Organisational information. If the site is a business or charity this should be obvious. In the UK, companies, including non-profit companies, should be registered with Companies House. Their company number will allow you to look up details of the company and its directors. Charities should be registered with the Charity Commission. Their registration number should be displayed and can be checked to verify their charitable status.
  • If the website collects information about you, perhaps through a sign-up form for news updates, it should include a Privacy Notice saying how they will use any personal information they collect, the legal basis for processing your information, how you can opt out, and who you can contact if you have any questions about this.
This homepage has an ‘About us’ tab, contact details, charity and company registrations, and a link to a privacy notice. Image credit: fullfact.org

If you can’t find this information, ask why this might be. While it may just be poor web design, it could be because the creators want to remain obscure.

3. Are there references for primary sources?

A primary source of information is written by the person who first produced the data, information, idea or opinion. Websites often summarise this information. This may be to disseminate it more widely, or to make it more accessible for non-specialists. It may also be to support the author’s own ideas, or to place a particular slant on the original information. In extreme cases, it can be used to create ‘fake news’ by surrounding a kernel of fact with misinformation.

You should be able to trace the original source of information by looking for references. These should detail where to find the source, which could be a book, published article, news item, or online publication. Wherever possible a link to the source should be included.

It’s important to check sources, especially for controversial topics. This enables us to verify not only whether the information is accurate but also whether it has been interpreted in an accurate and unbiased way. It’s particularly important to check sources for claims made on social media.

4. Has the information been fact checked?

Following up references enables us to check facts, but this isn’t always possible, particularly for fast-developing news stories, or information communicated over social media platforms. Fortunately, information may already have been fact-checked. A number of organisations have arisen in recent years to meet the need for objective fact-checking of claims that are made online and in the news.

When using such sites, we need to be as careful as we would with any site over who is running it and why, because some sites which claim to be objective in fact promote a particular viewpoint. Two sites which are both independent and reliable are fullfact.org and fact-check.org.

Full Fact is a UK-based charity and non-profit company that provides a fact-checking service for topical news items.

Fact Check is based in the United States and focuses on US politics, although there is some coverage of international topics.

A particular favourite of mine is politifact.com. While its scope is largely restricted to US politics, I do enjoy the six-point ‘truth-o-meter’ ratings it gives to statements, which range from ‘True’ for verified facts to ‘Pants on Fire’ for outright fabrications!

When attempting to navigate the myriad of information available online, taking a little time to go through these four steps will go a long way to ensuring that the sources of information you use are reliable.

If you are interested in online safety, you might like my post on Lessons from a Ransomware Attack.

Moving to Remote Working

I have a new-found respect for my ancient laptop, having just moved to home working as part of efforts to limit the transmission of COVID-19.

I work in the Education team at The Bodleian, the library of the University of Oxford. We work with visiting schools, teaching about the collections and exhibitions. Like many organisations, in response to government guidance the library has closed to visitors (although online services remain available) and all staff who can have moved to remote home working.

Much of my last day at work was spent preparing for this. With a background in school teaching, I had not had much experience of this (schools generally like you to be with the pupils you’re teaching) but I brought my chunky laptop to work to set it up.

There is a plethora of tools to assist remote working, but the team chose to use those most readily available. To some extent this was determined by what is acceptable for use within the university, but that had the advantage of support from the ICT services team. I think this is an important point. There is almost too much advice on which tools to use, with plenty of opinion on which are best. What matters, particularly when a quick set-up is needed, is which tools are available and well supported. So while the option we chose – MS Teams – was a good fit for a team used to Outlook and the Microsoft Office applications, a group used to, say, Google applications would do better to choose tools that integrate with that suite.

Old but still got it!

Given the age of my Toshiba 660 laptop, its obsolete operating system, limited RAM, and hard drive already bursting at the seams, I approached setting up with some trepidation, concerned that it would no longer be supported, or might just fall over under the strain! In the end I need not have worried. For the record (and to make me seem much more tech-savvy than I actually am) the process involved:

  • Installing Cisco AnyConnect Client (fortunately there was one available for Windows 7)
  • Connecting to a VPN
  • Mapping network drives I would need to access
  • Connecting using the appropriate security credentials
  • Downloading and installing Microsoft Teams and linking with the relevant work teams

That went very well at work. Admittedly, at one point I began to doubt that I knew how to spell my own name, let alone all the passwords I had to juggle (no, DON’T use just one!), and I did have to make one call to a very calm and collected IT services engineer (thank you), but generally it was much more straightforward than I had feared.

A pity, then, that when I got home nothing worked! I remapped the drives on the advice of colleagues who had similar problems, but it turned out to be an issue with the VPN pathway. When I sorted that it all came to life. Well, apart from having to reinstall the MS Teams app the first time I tried to use it. After that it worked like a dream; admittedly a slightly flaky dream where things judder a bit occasionally and there’s a slight delay in most actions, but things worked acceptably.

I didn’t find the MS Teams layout particularly intuitive at first, but once I got the hang of it, everything seemed to do what it was meant to, so we’re happily messaging and even holding video team meetings (sorry about the neon running top, colleagues). I like the way it integrates with other Microsoft features such as Outlook calendars, contacts and OneDrive.

I have to say, though, what I’m most pleased about is the performance of my nine-year-old laptop, on its second battery, with its ten-year-old operating system and Office 2007 applications. It makes you wonder whether the shiny new hardware and expensive upgrades that are pushed at us are really worth it. A bit like me, there may be newer, slimmer models available, but there’s life in the old dog yet!

Adventures in Whole Class Feedback: Planning for Feedback

I have been interested in the claims made for whole class feedback for some while, but have had some reservations. I have always seen formative assessment as a central element of teaching and learning, and providing written (as well as verbal) feedback as crucial to helping children understand what they have done well and what they need to do to improve further. I also quite like marking and enjoy both the immediate reaction of children to seeing their hard work appreciated, and their longer term journey of progress over time.

Nevertheless, while I may like marking, I don’t always like the time it takes. As I write the same comment on the fourteenth piece of work from a class, I find myself thinking that this probably wasn’t the best use of my time. As Anthony Radice wrote in his post Whole Class Feedback: A Winner All Round, it’s important for teachers to consider what else we could be doing with the time we spend on close marking like this, and whether other activities, such as planning or creating resources, might be more useful in helping pupils make progress.

With all this in mind, I agreed with my line manager that development of whole class feedback would be an objective for my performance review this year. I’ll be developing my practice in class and feeding back to the departmental team.

When and what to mark

I have decided to focus on year 8 as I have three mixed ability computing classes in this year group.

There are several types of task that these classes do:

  1. Work in class which will be directed to an element of a unit, for example editing sound files in a unit on podcasts, or the use of subroutines in a unit on algorithms.
  2. Half-termly homework. In computing, pupils choose a task for each half of each term. This is an individual project they work on for several weeks. Examples include designing a website on a theme, or designing a revision resource for a topic. Pupils work on different tasks.
  3. Discrete homework. These are shorter homework tasks, taking a few minutes, for example reinforcing key vocabulary, or a quiz on PEGI game ratings. They are set one lesson for completion by the next. The tasks may be differentiated, but everyone is doing the same thing.

I think some of this work lends itself better to whole class feedback than others. In class we are usually all working towards the same goals, so it’s easy for me to pick up on good examples and to spot errors or misconceptions. It makes sense to give verbal feedback to the class (as well as taking opportunities to talk to individuals). The written feedback is for myself: picking up on what happens in the lesson to better inform my teaching.

Pupils put a lot of work into the half-termly homework and I think they deserve some individual feedback from me. What I’m aiming to work on is making that feedback truly individual. Rather than repeating comments on common themes, though, I intend to note these and address them as feedback to the class.

The discrete homework tasks are usually self-marking, such as quizzes, so my focus is usually on what the scores mean, for example a misunderstanding of a particular concept. Often I will revisit this in later teaching rather than give specific feedback on the homework, but I’ll see whether whole class feedback is more effective.

So, that sets the scene for what I plan to do:

  • Continue to use verbal in-lesson feedback as I do already, but keep better track myself of how it informs my teaching.
  • Restrict individual feedback to the truly individual elements of homework projects and add whole class feedback on common learning points.
  • Give whole class feedback a try for discrete tasks, where previously I might have just revisited the learning in the course of a lesson.

I’ll make sure to feed back how we get on!

Image: publicdomainpictures.net

Computing, creativity and cheating

Creativity and coding

I believe that creativity is at the heart of computing. A couple of years ago I marked the passing of Seymour Papert, the creator of Logo, with this post on his legacy. He created and promoted that computer language to foster creativity in students. The focus on creativity also drives many of the current generation of educational developers. Scratch, a free online scripting language, allows all its users, most of whom are children, to create and share stories, games, and animations. Created in 2007, Scratch now has more than 4.3 million users worldwide, mainly between the ages of 8 and 18, and nearly 7 million projects. It is widely used in UK schools and is many children’s first experience of scripting code instructions. Creativity is also a driving feature behind other computing innovations commonly used in UK schools, such as the coding language Python, the Raspberry Pi and the BBC micro:bit.

Problems with assessment

If we accept this central role of creativity, it follows that the assessment of computational thinking, and its practical output as novel solutions to coding problems, must take account of this. Unfortunately, in recent years the assessment of GCSE Computer Science coursework has been bedevilled by the appearance of programming solutions to the set problems on the internet. This has forced the exams regulator, Ofqual, to remove this element from the assessment. The current situation is that a programming task forms part of the course, but marks do not form part of the assessment, which is therefore based solely on terminal exam papers. Unfortunately this is an issue that occurs not just at GCSE, but at all levels of education.

Ofqual consultation

Ofqual are currently consulting on this issue for exams from 2020 onwards through a consultation document on the future of assessment for GCSE Computer Science. You can respond to the consultation document here.

I think that they have thought carefully about the pros and cons of different methods of assessment. I am disappointed, however, that there is not more explicit mention of creativity in Computer Science. Ofqual make a comparison with other subjects with a coursework element, such as design and technology, but this seems to be a consideration of practical skills which, while important, are not the whole picture. I feel that what is missing is the role of creativity in the elements and practice of computational thinking.

Nevertheless, I think Ofqual have left the door open to a solution that will allow students to demonstrate creativity in their thinking. In enabling exam boards to issue pre-release material to candidates (in a similar way to creative subjects such as art), there is scope for students to think and prepare for a creative response to a particular context, without the details of the specific task being revealed. I hope that in the future, developments in technology will mean that creative computational thinking can be securely assessed in a way that more closely mirrors the reality of programming than the exam hall.

The consultation closes at 4pm on Monday 3rd December 2018. I would urge anyone involved in teaching computing to take some time to make a response.

Image: Pixabay