Putting 3D research into the Grenville Shop

This was originally published on the British Museum blog as “A new dimension in home shopping”.

Over the last four years, the British Museum has been producing 3D models that can be viewed by anyone online. The roots of this work can be traced back to the Arts and Humanities Research Council’s funding of the MicroPasts crowdsourcing project with University College London, which produced 3D objects for academic research. The Museum continued its 3D output by using native mobile phone applications to publish a selection of objects on the Sketchfab platform, with many available to download under a Creative Commons Non-Commercial licence.

All of this work was based around the principles of Open Science and the premise of being cheap, quick and easy for anyone to replicate. We used a technique called photogrammetry, in which multiple photographs are taken in a strategic pattern around the object and then processed into a 3D model in specialist software. Ideally anyone, whether an individual or an institution, should be able to replicate our methods to create 3D representations of archaeology or artworks.

Many have questioned the worth of these 3D models – what value do they add? My answer is that they are a natural extension of museum object documentation. There is a clear progression from line drawing to photography, and now to 3D representations which can be audio described, annotated, reused and embedded. There is also the potential for them to be monetised, creating a valuable income stream to fund some of the Museum’s work.

We have identified and tested several paths, including Virtual Reality (VR) experiences and working with the gaming and other creative industries. One of the most exciting, however, was working in partnership with the British Museum Company – the Museum’s commercial arm.

[Embedded Sketchfab model: Room 3, by The British Museum]

Together we discussed the concept and process of creating facsimiles for sale in the British Museum shops, both online and on site. We had previously worked with Oxfordshire-based 3D company ThinkSee3D, who had provided replicas for our successful Asahi Shimbun Displays: Creating an ancestor: the Jericho Skull; Containing the divine: a sculpture of the Pacific god A’a; and Moving stories: three journeys. We worked in partnership with ThinkSee3D, using a series of new techniques, to produce high-quality items for sale in the Museum’s Grenville Shop. They were produced directly from models created in-house and drawn from the collection of models on Sketchfab.

It quickly became apparent that printing models in plastics would not be very environmentally friendly and that gypsum prints would be too costly, so we decided to use a method of casting directly from a mould derived from a 3D model.

ThinkSee3D have now developed a range of products for sale, starting with the Statue of Roy, Priest of Amun (shown above as a 3D capture from which the mould was created). It has been cast from reusable moulds in Jesmonite (a water-based resin commonly used in museums), with the potential to produce it in the material of your choice – bronze, clear resins, or even chocolate! You can now buy a resin replica of the statue of Roy from the Museum’s Grenville Shop or online for £300.

Cultural Heritage Spotlight: Q&A with Daniel Pett from the British Museum (Part 1)

This was originally published as part 1 of a 3-part interview on the Sketchfab blog in January 2017.

Our Cultural Institutions page highlights our ongoing support of museums and cultural institutions with free accounts and access to tools. In Cultural Heritage Spotlight, we explore museums and cultural institutions who are using 3D technology to bring new life to their collections. Today’s blog post features Daniel Pett’s effort to make the collections of the British Museum accessible to anyone in 3D and VR.

Daniel Pett is a Senior Digital Humanities Manager at the British Museum. He has a background in Archaeology, having studied at the Institute of Archaeology (UCL) and Cambridge University, and he has also worked in Telecoms and Investment Banking technology and subsequently as technical lead for the Portable Antiquities Scheme. He has been co-lead on the MicroPasts project with Professor Andy Bevan (UCL) and now leads the British Museum’s foray into the world of Digital Humanities, sitting between the curatorial community of the Museum and the Digital and Publishing department. One of the most recent projects he delivered was the new Knowledge Search application for the Museum, which brings together many of the Museum’s resources in one interface.


As a side-project he has co-created one of the largest and most breathtaking 3D/VR collections of cultural artefacts in the world. The British Museum was an early adopter of Sketchfab, creating its account in October 2014: 121 3D models later, with more than 380,000 views and 3,000 likes on their models, they are now the most followed museum on Sketchfab. The Jericho Skull has been featured by CNN and National Geographic:

The first foray into making British Museum 3D content available on Sketchfab was through the MicroPasts project, using crowdsourced photo-masking; subsequently, Thomas Flynn placed his models online under the BM banner. Daniel now shares the British Museum’s collections with the entire world, making them easily accessible for educational purposes, scientific reasons and, of course, for anyone who is interested in culture. Thanks to his initiative and knowledge transfer to colleagues, the British Museum is helping to democratize culture and digitally preserve its collections.

Sharing their artefacts is also a way to promote the British Museum’s collections and encourage people to visit the museum to discover them in person. It can also be a way to show hidden artefacts, since not all of the collection is on display in the Museum, or an easy way to manipulate fragile or very small or large artefacts.

Daniel will explain to us today how he has been able to achieve all of this with a restricted budget and a short time allocation!

Daniel, thanks for answering our questions:

First, could you explain your process for creating all these models?

Our 3D work is all based around photogrammetry, or Structure from Motion, and builds on work first done in the Museum by Southampton University on Hoa Hakananai’a, then by the MicroPasts team and collaborators, and finally by Thomas Flynn. The BM has also employed other, mostly more costly, 3D scanning techniques, principally medical imaging (led by curator Daniel Antoine), but also some LIDAR usage and the work that CyArk conducted on our Assyrian reliefs. The famous archaeologist Dominic Powlesland has also done 3D work on BM collections data (see the following 3D cremation urn) and we’ve had collaborative work with students and academics since:

Our output is generally produced so that anyone can replicate what we do, and this is what has found its way onto Sketchfab. Our basic process for capturing a sculpture in a gallery, for instance, is as follows (bear in mind we can generally only do this in opening hours and under gallery lighting); a scripted sketch of the PhotoScan steps (5–9) follows the list:

  1. Find appropriate sculpture, usually on morning walk through gallery to office.
  2. Take photos at 5-10 degree intervals at low, mid and high levels.
  3. To process the models, I generally use Agisoft’s PhotoScan Pro (which was provided during the AHRC-funded MicroPasts project); Thomas Flynn has experimented with a variety of software.
  4. If complex, we deploy the photographs to MicroPasts for photo-masking by our crowd contributors (any museum can use this facility; documentation on how to do this exists).
  5. Import masks into PhotoScan
  6. Align photos
  7. Build dense cloud
  8. Build mesh
  9. Build texture
  10. Upload models, masks, PhotoScan files and images to GitHub and obtain a DOI for the 3D capture.
  11. Upload to Sketchfab under the licence that the Museum allows (Creative Commons Attribution-NonCommercial-ShareAlike), under its interpretation of the Public Sector Information Act.
  12. Ask a curator if they will annotate the model and encode their knowledge for others to enjoy.
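
For steps 5–9, PhotoScan Pro exposes a Python scripting API that can batch the processing once the photos (and any masks) are in place. The sketch below is a minimal outline of that kind of automation under the assumption of a PhotoScan 1.x-era API; exact function and parameter names (HighAccuracy, Arbitrary, GenericMapping and so on) vary between PhotoScan releases and its successor Metashape, and the file paths are purely illustrative, so check the manual for your version rather than treating this as a drop-in script.

```python
# Minimal PhotoScan Pro automation sketch - API names vary by version, check your release's manual
import PhotoScan  # available inside PhotoScan Pro's bundled Python, not via pip

doc = PhotoScan.app.document          # the currently open project
chunk = doc.addChunk()                # a new chunk to hold this capture

# Add the gallery photographs (crowdsourced masks, if any, are imported separately)
photos = ["/path/to/roy/IMG_0001.JPG", "/path/to/roy/IMG_0002.JPG"]  # hypothetical paths
chunk.addPhotos(photos)

# Align photos, build dense cloud, mesh and texture - mirroring steps 6-9 above
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()
chunk.buildDenseCloud()
chunk.buildModel(surface=PhotoScan.Arbitrary)
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture()

doc.save("/path/to/roy.psz")           # project file to archive alongside photos and masks
chunk.exportModel("/path/to/roy.obj")  # mesh ready for upload to Sketchfab
```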

Some 3D models have been done in collaboration with other institutions. For example, the very recent netsuke models have been produced by the Art Research Center at Ritsumeikan University, Japan:

Is it costly (in time, money, equipment, etc.)?

The biggest cost for all activities is time. We’re all time poor! We use very basic equipment, for example mobile phones with decent sensors, low-end digital SLRs and reasonable compact cameras, allied with lazy Susan turntables and tripods. At the moment I’m just using my OnePlus 3 mobile, as my one-year-old daughter broke my DSLR when she pulled it off the table at home.


We have had no budget set aside (so far) for this work, and 3D imagery is usually captured when the opportunity arises, or as we walk through the galleries to work and the light is okay! The equipment I use is my own, and most of the processing and masking is done in my own time (either on the commute to work or batched overnight). I use a MacBook Pro with 16GB RAM or an iMac with 20GB RAM; my work PC cannot handle the load! My colleagues (Jennifer Wexler, Andy Bevan, Chiara Bonacchi, Thomas Flynn and Adi Keinan-Schoonbaert) have access to a few machines of different types, and I am unsure what our contributors to MicroPasts used (for example Hugh Fiske, who now produces great content for DigVentures).

What has been the turning point that made you think “we need to digitize our collection”?

The Museum has been digitising its records for many years now, building on antiquarian and more recent analogue methods, and the creation of 3D is just an extension of this process following the adoption of photography. What I want to achieve is a total record for each museum object, but as we have around 8 million objects, many of them unsuitable for 3D (for example in-copyright works of art, culturally sensitive objects and extremely shiny metals), this will not be practicable.

So for the ones we can capture we have curatorial interpretation (the work that Neil Wilkin did on monumental dirks for instance – see The Beaune Dirk 3D model), multiple images, factual data, a 3D model that can be annotated and a wide variety of data points that can be linked off to other information sources.

No museum, apart from maybe the Smithsonian, is making 3D data work effectively as a resource yet. I want to see our 3D work being not only a research tool, but a revenue creator, a knowledge-sharing device and a way of allowing serendipitous reuse of Museum content. For example, we could make boxes of British Museum chocolates direct from the collection on demand (I’ve experimented with silicone moulds and 90% dark chocolate; the evidence has been eaten); we could make concrete casts of the Molossian Hound (one of my favourite pieces of sculpture in our collection) from the model on Sketchfab and see them in garden centres worldwide; we could see the shop not having to keep replicas in stock, but instead printing on demand; and we could see manufacturers buying a licence to produce mass replicas of BM content, with the Museum taking royalties.

Some of the things I’ve seen BM content being used for are really inspiring, for example Robert Kaleta’s PhD work at UCL, the fantastic Paul Mellon-funded Digital Pilgrim project from Amy Jeffs and Lloyd de Beer, or the Museum in a Box project that George Oates and Thomas Flynn run. I would ideally like to take the opportunity to scan any new acquisition (see for example the Virgin and Child) or objects that are going on loan for long periods (see for example the Ancient Lives collection), which allows the public to still see them in detail even if they cannot view the real thing.

We’re also seeing our 3D work propagating onto the museum floor as handling objects (for example the Egyptian house for the Sunken Cities exhibition handling desk), as information points in gallery (for example the Jericho Skull, statue of A’a and the Kakiemon ‘Boy on a Go board’) and in VR work that the museum has done on the Bronze Age and African Rock Art. For all of these bits of work, we’ve had supportive curatorial staff who generally have been enthused by chat in the canteen at lunch.

The use of 3D is now making things possible that 2D representations cannot, but I do not believe Adrian Hon’s assertion that VR will break the museum. 3D has the potential to augment, enhance and improve the museum experience. Another area where we’ve started to use 3D models and printing is exhibition design: instead of cardboard mock-ups of display spaces, we can now print directly from CAD models, saving our designers lots of time.

Thanks again for sharing, Daniel! Stay tuned for Part 2 of this Q&A next week.

If you are part of a cultural institution, get in touch with us at museums@sketchfab.com to set up your free business account.

Google Search Appliance – British Museum install

Working with Extended Content Solutions, I have been project/product managing the new ‘Knowledge Search tool’ for the British Museum. Built using AngularJS on ECS’s proprietary software, it pulls together multiple data sources through the use of Google Search Appliance.

This is one of the first products I have worked on, which isn’t open source.

Preparing the British Museum Bronze Age index for transcription

Originally published at: http://research.micropasts.org/2014/04/30/preparing-the-index/

Since late 2013, the MicroPasts team has been preparing the British Museum’s (BM) Bronze Age Index to be the first offering on our crowd-sourcing platform. This corpus consists of around 30,000 roughly A4-sized cards, holding information going back to as early as 1913. The majority of these are double-sided and generally have text on the front and a line drawing on the reverse (many variants have been discovered, such as large fold-out shield plans).

[Embedded MicroPasts application: British Museum Bronze Age Index Drawer B16 – the crowd-sourcing platform]

Over the last few years, several curators have mooted exercises to turn this amazing resource into a digital archive (Ben Roberts, now at Durham University, attempted to turn the transcription into an AHRC-funded Collaborative Doctoral Award), but this had not come to fruition until the advent of the MicroPasts project. Internal discussions on how best to deal with these cards had been raging for a number of years, and it was felt that this project could perhaps be the ideal solution, providing a new type of museum and public interaction which the BM had not explored previously.

Digitising this corpus is reasonably straightforward, and we have employed Dr Jennifer Wexler (@jwexler on Twitter) to manage the scanning process; she has been doing this since February, after her return from fieldwork in Benin.

The equipment needed for this is relatively straightforward: the BM has acquired two high-capacity/speed Canon scanners, which can scan 60 and 100 sheets per minute respectively at 600 dpi, and once this initial project is over they can be reused to turn more archival materials into potential crowd-sourcing material. You can see a picture of Neil’s former office (he’s just moved to a nicer one – we’re not jealous) being used as the scanning centre in one of his tweets:

The first drawer scanned is known as A9 (this application on the platform), and this was done by the Bronze Age Curator Neil Wilkin (@nwilkinBM on Twitter) over a few weeks whilst dispensing with his other duties. Once Jennifer returned, scanning started in earnest! These high-resolution images were then stored in various places to facilitate good data preservation (on an external 4TB hard drive, the Portable Antiquities Scheme server cluster and Amazon S3), and they were then stitched together as composite images by Daniel Pett (@portableant on Twitter) using a simple Python script (a sketch of this approach follows) and uploaded to Flickr (for example see this set) for the crowd-sourcing platform to access and present as tasks for our audience to assist with. All of these images have been released under the most liberal licence that Flickr permits (we would ideally have liked to make them CC0, but this option does not exist) and so they are served up under a CC-BY licence. The data that is transcribed will also be made available for download and reuse by anyone, under a CC0 licence. The embedded tweet below shows an example of one of the stitched cards:
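
A minimal sketch of that kind of stitching script is below. It assumes the Pillow imaging library and assumes each composite simply places the front and back scans of a card side by side; the folder layout, file naming and the stitch_card helper are hypothetical, and the actual script (and its Flickr upload step) may well have differed.

```python
# Sketch: pair front/back card scans into single composite images (assumes Pillow is installed)
from pathlib import Path
from PIL import Image

def stitch_card(front_path: Path, back_path: Path, out_path: Path) -> None:
    front, back = Image.open(front_path), Image.open(back_path)
    height = max(front.height, back.height)
    composite = Image.new("RGB", (front.width + back.width, height), "white")
    composite.paste(front, (0, 0))              # text side on the left
    composite.paste(back, (front.width, 0))     # line drawing on the right
    composite.save(out_path, quality=95)

# Hypothetical layout: drawerA9/front/0001.jpg pairs with drawerA9/back/0001.jpg
drawer = Path("drawerA9")
(drawer / "composite").mkdir(parents=True, exist_ok=True)
for front in sorted((drawer / "front").glob("*.jpg")):
    back = drawer / "back" / front.name
    if back.exists():
        stitch_card(front, back, drawer / "composite" / front.name)
```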

The platform that we’re using for serving up the crowd-sourcing tasks has been created by Daniel Lombraña González (lead developer – @teleyinex on Twitter) and the PyBossa team, and it is a departure from the usual technology stack that the project team has used previously. Installation of the platform is straightforward and it was deployed onto Portable Antiquities Scheme hardware in around 15 minutes (a sketch of how tasks are pushed to the platform follows). We then employed Daniel to assist with building the transcription application skeleton (in conjunction with project lead Andy Bevan (not on Twitter!) and Daniel Pett) that would be used for each drawer, whilst we also developed our own look and feel to give MicroPasts some visual identity. If you’re interested, the code is available on GitHub, and if you have suggestions for improvements you could either fork the code or comment on our community forum.
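
For a sense of how card images become tasks, the sketch below posts one task per composite image to a PyBossa instance over its JSON API. The instance URL, project id and payload fields are illustrative only (older PyBossa releases use app_id rather than project_id, and the MicroPasts deployment’s own tooling may have worked differently), so consult the API documentation for your PyBossa version before relying on any of these names.

```python
# Sketch: push composite card images to a PyBossa instance as transcription tasks
# (endpoint and field names are illustrative - check your PyBossa version's API docs)
import requests

PYBOSSA_URL = "https://crowdsourced.micropasts.org"  # hypothetical instance URL
API_KEY = "your-api-key"                              # from your PyBossa account profile
PROJECT_ID = 1                                        # numeric id of the transcription project

card_images = [
    "https://farm8.staticflickr.com/.../card_0001.jpg",  # hypothetical Flickr URLs
    "https://farm8.staticflickr.com/.../card_0002.jpg",
]

for url in card_images:
    payload = {"project_id": PROJECT_ID, "info": {"url_b": url}}
    response = requests.post(
        f"{PYBOSSA_URL}/api/task",
        params={"api_key": API_KEY},
        json=payload,
    )
    response.raise_for_status()  # fail loudly if the task was not created
```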


In the months building up to launch, lots of debugging and user testing were conducted to see how the site reacted and whether the tasks we offered were feasible and interesting enough. Chiara Bonacchi (@Chiara_Bonacchi) and Adi Keinan (@Adi_Keinan) worked on the main project site, building our Facebook and Twitter engagement.

Chiara has also developed our evaluation frameworks, which we have been integrating into the system and which we feel are vital to discovering more about people’s engagement with our platforms, how their motivations progress through time and, hopefully, the project’s success! This evaluative work aims to be one of the first studies to follow individual users’ interactions on a crowd-sourcing website.

And then we launched and tasks are ongoing:

This project is very exciting for the BM and especially for our curatorial staff. It could unlock new opportunities, and Neil sums up very succinctly why we are doing this public archaeology project, so we’ll leave it to him:

Thank you for participating!

Lost Change: mapping coins from the Portable Antiquities Scheme

Today sees the launch of Lost Change, an innovative and experimental application that allows coins found within England and Wales and recorded through the British Museum’s Portable Antiquities Scheme (PAS) to be visualised on an interactive, dual-mapping interface. This tool enables people to interrogate a huge dataset (over 300,000 coin records can be manipulated) and discover links between a coin’s place of origin (the issuing mint, or a vaguer attribution if this location is uncertain) and where it was discovered and then subsequently reported to the PAS Finds Liaison Officers.

While much of the data is made available for re-use on the PAS website under a Creative Commons licence, some details are closely guarded to prevent illicit activity (for example night-hawking, or detecting without landowner permission), and so this application has been developed with these restrictions in mind. An object’s coordinates are only mapped to an Ordnance Survey four-figure National Grid Reference (which equates to a point within a 1km square), and only if the landowner or finder has not requested that they be hidden from the public.
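
To illustrate what that 1km-square precision means in practice, the sketch below truncates a six-figure National Grid Reference to its four-figure equivalent. The to_four_figure function is purely illustrative and is not the code the application itself uses.

```python
# Sketch: reduce a six-figure OS National Grid Reference to four figures (a 1 km square)
def to_four_figure(grid_ref: str) -> str:
    ref = grid_ref.replace(" ", "").upper()
    letters, digits = ref[:2], ref[2:]          # e.g. "SU" and "123456"
    if len(digits) % 2 != 0:
        raise ValueError("grid reference must have an even number of digits")
    half = len(digits) // 2
    easting, northing = digits[:half], digits[half:]
    # Keep only the leading two digits of each coordinate -> 1 km resolution
    return f"{letters}{easting[:2]}{northing[:2]}"

print(to_four_figure("SU 123 456"))  # -> "SU1245", i.e. somewhere within that 1 km square
```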

The distribution of coins is biased by a number of factors (a project funded by the Leverhulme Trust is looking at this in greater depth) which could include:

  • Whether metal detecting is permitted by the landowner, or the topography makes detecting difficult
  • Soil type and land use
  • Whether there is an active community of metal detectorists within the vicinity


The tool is straightforward to use. The left hand pane holds details for the place of discovery; the right hand side holds details for the place of issue, the mint. These panes work in tandem, with data dynamically updating in each, depending on the user’s choice. A simple example to get going is this:

  • Click on “Iron Age” within the list of periods
  • Within the right hand pane, click on one of the three circular representations and this will highlight where the coins from this mint were found in the left hand pane. The larger the circular representation, the more coins from that mint have been recorded.
  • If one clicks on any of the dots within the left hand pane, these are selected and an overlay in the right hand pane allows dynamic searching of the PAS database.

The PAS intends to build on this project at a later stage and will be seeking further funding to enable this to happen, with many more facets of discovery available to query the dataset.

Lost Change was funded through a £5,000 grant from the CreativeWorks London ‘Entrepreneur-in-Residence’ programme.

The PAS is grateful to Gavin Baily and Sarah Bagshaw from Tracemedia who developed the application, and everyone who has contributed to the PAS database.

If you have any feedback on the project, please contact the PAS via info@finds.org.uk.

This originally appeared on the British Museum blog

Yahoo! Openhack EU (Bucharest)

A pair of history enthusiasts

Last weekend, I was invited to attend the Yahoo! Openhack EU event held in Bucharest, Romania, as part of a team of “History Enthusiasts”, to try and help participants generate ideas using cultural-sector data. This came about from the really successful History Hack Day that Matt Patterson organised earlier this year; because of it, Yahoo!’s Murray Rowan invited him to assemble a team to go to Romania and evangelise. Our team comprised myself, Jo Pugh from the National Archives and our leader Matt; we went armed with the datasets that were made available for the hack day and a list of APIs from Mia Ridge (formerly of the Science Museum and now pursuing a PhD).

The Openhack event (hosted in the Crystal Palace Ballrooms – don’t leave the complex, we were told, the wild dogs will get you!) started with a load of tech talks; most interesting for me were the YQL one (to see how things had progressed), Douglas Crockford on JSON (I watched this on video later) and Ted Drake’s accessibility seminar. One thing I thought was absent was the Geo element, something that is extremely strong at Yahoo! (API-wise, before you moan about maps) and an element that always features strongly in hack day mashups and hacks. Our team then gave a series of short presentations to the Romanians who were interested in our data – unfortunately not too many, but that seemed to be the norm for the enthusiasts. We felt that a lot of people had already come with ideas and were using the day as a collaborative catalyst to present their work; not that this is a bad thing – be prepared and your work will be more focused at these events. Between us we talked about the success of the hack day at the Guardian, Jo presented material from the National Archives, and then we discussed ideas with various people throughout the day; for example:

  1. Accessing shipping data – one of the teams we spoke to wanted some quite specific data about routes. However, we found a static html site with a huge amount of detail and suggested scraping and then text extraction for entities mentioned and producing a mashup based on this – see submarines hack
  2. How to use Google Earth time slider to get some satellite imagery for certain points in time (the deforestation project was after this)
  3. Where you can access museum-type information – the History Hack Day list
  4. Which APIs they could use – Mia Ridge’s wiki list

I tried to do a few things whilst there: some Twitter analysis with Gephi and R (my laptop wasn’t playing ball with this) and building some YQL open tables for Alchemy’s text-extraction APIs and Open Library (I’ll upload these when tested properly). Matt looked at trying to build either a JSON API or a mobile application for Anna Powell-Smith’s excellent Domesday mapping project (Django code base), and Jo played with his data for Papal bullae from the National Archives using Google’s Fusion Tables, as well as looking at patterns within the syntax via IBM’s Many Eyes tool.

Hacking then progressed for the next 24 hours, interspersed with meals, some entertainment provided by the Algorythmics (see the embedded video below from Ted Drake), who danced a bubble sort in Romanian folk style, and two brief interludes to watch Eurovision (Blue and the Romanian entry). We retired to the bar at the JW Marriott for a few Ursus beers and then back to the Ibis for the night, before returning the next day to see what people had produced to wow their fellow hackers and a panel of judges. Unfortunately, I had to head back to the UK (to help run the ARCN CASPAR conference) from OTP while the hacks were being presented, so I didn’t get to see the finished products. The internet noise reveals some awesome work, and a few hacks that I liked the sound of are commented on below. I also archived off all the Twitter chat using the #openhackeu hashtag if anyone would like it (currently over 1,700 tweets). There was also some brilliant live blogging by a very nice chap called Alex Palcuie, which gives you a good idea of how the day progressed.

So, after reading through the hacks list, these are my favourites:

  1. Mood music
  2. The Yahoo! Farm – robotics and web technology meshed, awesome
  3. Face off (concept seems good)
  4. Pandemic alert – uses WebGL (Chrome only?)
  5. Where’s tweety

And these are the actual winners (there was also a proper ‘hack’, which wasn’t really in the vein of the competition as laid out on the first day, but shows skill!):

  • Best Product Enhancement – TheBatMail
  • Hack for Social Good – Map of Deforested Areas in Romania
  • Best Yahoo! Search BOSS Hack – Take a hike
  • Best Local Hack –  Tourist Guide
  • Hacker’s choice – Yahoo farm
  • Best Messenger Hack – Yahoo Social Programming
  • Best Mashup – YMotion
  • Best hacker in show – Alexandru Badiu, he built 3 hacks in 24 hours!

To conclude, Murray Rowan and Anil Patel’s team produced a fantastic event, which for once had a very high proportion of women in attendance (maybe 10-25% of an audience of over 300) – something that will please many of the people I know via Twitter in the UK and beyond. We met some great characters (like Bogdan Iordache), saw the second biggest building on the planet (it was the biggest on 9/11, the taxi drivers proudly claim) and met a journalist I never want to meet again… According to the hack day write-up, 1,090 cans of Red Bull, 115 litres of Pepsi and 55 lbs of coffee were consumed (and a hell of a lot of food, judging by some of the food mountains that went past!)

Here’s to the next one – maybe a cultural institution can set a specific challenge to be cracked at it. And I leave you with Ted Drake’s video:

Archiving twitter via open source software

Over the last few months I’ve been helping Lorna Richardson, a PhD student at the Centre for Digital Humanities at UCL. Her research is centred around the use of Twitter and social media by archaeologists and others who have an interest in the subject. I’ve been using the platform for around 3 years (starting in January 2008) and I’ve been collecting data via several methods, for several reasons: as a backup of what I have said, to analyse the retweeting of what I’ve said, and to see what I’ve passed on. To do this, I’ve been using several different open source software packages: Thinkupapp, Twapperkeeper (my own install of the open source version) and Tweetnest. Below, I’ll run through how I’ve found these platforms and what problems I’ve had getting them to run. I won’t go into the Twitter terms and conditions conversation and how it has affected academic research – just be aware of it…

Just so you know, the server environment that I’m running all this on is as follows: the Portable Antiquities Scheme’s dedicated Dell machine, located at the excellent Dedipower facility in Reading, running a Linux OS (Ubuntu Server), Apache 2, PHP 5.2.4 and MySQL 5.04, with the following modules that you might find useful: curl, gd, imagemagick, exif, json and simplexml. I have root access, so I can pretty much do what I want (as long as I know what I’m doing, but Google teaches me what I need to know!). To install these software packages you don’t need to know too much about programming or server admin, unless you want to customise scripts etc. for your own use (I did…). You can probably install all this stuff onto Amazon cloud-based services if you can be bothered. I’ve no doubt made some mistakes below, so correct me if I am wrong!

Several factors that you must remember with Twitter:

  1. The system only lets you retrieve 3200 of your tweets. If you chatter a lot like Mar Dixon or Janet Davis, you’ll never get your archive 🙂 Follow them though, they have interesting things to say….
  2. Search only goes back 7 days (pretty useless, hey what!)
  3. Twitter change their T&C, so what is below might be banned under these in the future!
  4. Thinkupapp and Twapperkeeper use OAuth to connect to your Twitter account, so no passwords are compromised.
  5. You’ll need to set up your Twitter account with application settings – secrets and tokens are the magic here. To do this, go to https://dev.twitter.com/apps, register a new app and follow the steps outlined in the documentation for each tool (if you run a blog and have connected your Twitter account, this is old hat!). A short sketch of what those tokens let you do follows this list.
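
As a rough illustration of what those tokens are for, and of the 3,200-tweet ceiling mentioned in point 1, here is a minimal sketch using the tweepy Python library to pull down your own timeline. It is not part of any of the tools below, the key strings are placeholders for the values from your registered app, and Twitter’s API limits have changed repeatedly since this was written.

```python
# Sketch: back up your own timeline with tweepy (Twitter serves at most ~3,200 recent tweets)
import json
import tweepy

# Keys and tokens come from the app you registered at dev.twitter.com
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

archive = []
# Cursor pages through the timeline 200 tweets at a time, up to the ~3,200 cap
for status in tweepy.Cursor(api.user_timeline, count=200).items(3200):
    archive.append({
        "id": status.id,
        "created_at": str(status.created_at),
        "text": status.text,
    })

with open("my_tweets.json", "w") as fh:
    json.dump(archive, fh, indent=2)
```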

Tweetnest

Tweetnest is open source software from Andy Graulund at Pongsocket. This is the most lightweight of the software that I’ve been using. It provides a basic archive of your own tweets, with no responses or conversation threading, but it does allow for customisation of the interface by editing the config file. Installing this is pretty simple: you need a server with PHP 5.2 or greater and the JSON extension. You don’t need to be the owner of the Twitter account to mine the tweets, but each install can only handle one person’s archive. You could have an install for multiple members of your team, if you wanted to…

Source code is available on GitHub and is pretty easy to hack around if you are that way inclined. The interface also provides basic graphs of when you tweeted and search of your tweet stream, and has .htaccess protection of the update-tweets functionality (or you can set up a cron job if you know how to do this). My instance can be found at http://finds.org.uk/tweetnest. Below are a few screenshots of the interfaces and updating functions. The only issue I had with installing this was changing the RewriteBase directive, due to other things I am up to on that server.

Tweet update interface
Monthly archive of tweets

Thinkupapp

Thinkupapp has been through a couple of name changes since I first started to use it (I think it was Thinktank back then), and has been updated regularly, with new beta releases and patches appearing frequently. I know of a couple of other people in the heritage sector who use this software (Tom Goskar at Wessex, and Seb Chan of Sydney’s Powerhouse Museum mentioned he was using it this morning on Twitter).

This was originally a project by Gina Trapani (started in 2009); it now has a group of contributors who enhance the software via GitHub, is labelled as an Expert Labs project and is used by the White House (they had impressive results around the time of the State of the Union speech). This open source platform allows you to archive your tweets (again within the limits) along with their responses, retweets and conversations (it has the bonus of being able to mine Facebook for pages or your own data, and it can have multiple user accounts). It also has graphical interfaces that allow you to visualise how many followers you have gathered over time and your number of tweets, geocoding of tweets onto a map (you’ll need a Google Maps API key), export to an Excel-friendly format and a search facility. You can also publish your tweets onto your own site or blog via the API, and the system will allow you to view images and links that your virtual (or maybe real) friends have published on their stream of consciousness. You can also turn on or off the ability for other users to register on your instance and have multiple people archiving their tweet streams.

This is slightly trickier than Tweetnest to install, but anyone can manage it if they follow the good instructions, and if you run into problems, read their Google Group. One thing that might present an issue if you have a large number of tweets is a memory error – solve this by setting ini_set('memory_limit', '32M'); in the config file that throws the exception. You might also time out if a script takes longer than 30 seconds to run; again, this can be solved by adding set_time_limit(500); to your config file. Other things that went wrong on my install included the SQL upgrades (but you can do these manually via phpMyAdmin or the terminal if you are confident) and the Twitter API error count needing to be increased. All easy to solve.

Things that I would have preferred are clean URLs via mod_rewrite as an option, and perhaps that it was coded using one of the major frameworks like Symfony or Zend. No big deal though. Maybe there will also be a Solr-type search interface at some point, but as it is open source, fork it and create plugins like this visualisation.

You can see my public instance at http://finds.org.uk/social, and there are some screenshots of the interfaces below.

My thinkup app at finds.org.uk
Staffordshire hoard retweets

Embed interface – script to embed your tweet thread into another application

Graphs of followers etc

Twapperkeeper

The Twapperkeeper archiving system has been around for a while now, and has been widely used to archive hashtags from conferences and events. Of the software that I’ve been using, this is the ugliest, but perhaps the most useful for trend analysis. It has recently fallen foul of the changes in Twitter’s T&Cs, so the original site has had its really useful features expunged – namely data export for analysis. However, the creator of this excellent software has released an open source version that you can download and install as your own instance, called yourTwapperkeeper. I’ve set this up for the Day of Archaeology project and added a variety of hashtags to the instance so that we can monitor what is going on around the day (I won’t be sharing this URL, I’m afraid…). Code for this can be downloaded from the Google Code repository, and again this is an easy install – you just need to follow the instructions. Important things to remember here include setting up the admin users and who is allowed to register archives, working out whether you want to associate this with your primary account in case you get pinged for violation of the terms of service, and setting up your account with the correct tokens etc. by registering your app with Twitter in the first place.

Once everything is set up and you start the crawler process, your archive will begin to fill with tweets (from the date at which archiving started) and you can filter texts by retweets, dates created, terms and so on. With your own install of Twapperkeeper you can still export data, but at your own risk, so be warned!