Putting 3D research into the Grenville Shop

This was originally published on the British Museum blog as “A new dimension in home shopping”.

Over the last four years, the British Museum has been producing 3D models that can be viewed by anyone online. The roots of this work can be traced back to the Arts and Humanities Research Council’s funding of the MicroPasts crowdsourcing project with University College London, which produced 3D objects for academic research. The Museum continued its 3D output by using native mobile phone applications to publish a selection of objects on the Sketchfab platform, with many available to download under a Creative Commons Non-Commercial licence.

All of this work was based around the principles of Open Science and the premise of being cheap, quick and easy to replicate for anyone. We used a technique called photogrammetry (multiple photographs taken in a strategic pattern around the object) and the resulting output was rendered in 3D software. Ideally anyone, an individual or institution, should be able to replicate our methods to create 3D representations of archaeology or artworks.

Many have questioned the worth of these 3D models – what value do they add? My answer is that they are a natural extension of museum object documentation. There is a clear progression from line drawing to photography, and now to 3D representations which can be audio described, annotated, reused and embedded. There is also the potential for them to be monetised, creating a valuable income stream to fund some of the Museum’s work.

We have identified and tested several paths, including Virtual Reality (VR) experiences and working with the gaming and other creative industries. However, one of the most exciting was working in partnership with the British Museum Company – the Museum’s commercial arm.

Room 3 by The British Museum on Sketchfab

Together we discussed the concept and process of creating facsimiles for sale in the British Museum shops, both online and on site. We had previously worked with Oxfordshire-based 3D company ThinkSee3D, who had provided replicas for our successful Asahi Shimbun Displays Creating an ancestor: the Jericho Skull, Containing the divine: a sculpture of the Pacific god A’a and Moving stories: three journeys. We worked in partnership with ThinkSee3D using a series of new techniques to produce high quality items for sale in the Museum’s Grenville Shop. They were produced directly from models created in-house and drawn from the collection of models on Sketchfab.

It quickly became apparent that printing models in plastics would not be very environmentally friendly and that gypsum prints would be too costly, so we decided to use a method of casting directly from a mould derived from a 3D model.

ThinkSee3D have now developed a range of products for sale, starting with the Statue of Roy, Priest of Amun (shown above as a 3D capture from which the mould was created). It has been cast from reusable moulds in Jesmonite (a water-based resin commonly used in museums), and could potentially be produced in the material of your choice – bronze, clear resins, or even chocolate! You can now buy a resin replica of the statue of Roy from the Museum’s Grenville Shop or online for £300.

Cultural Heritage Spotlight: Q&A with Daniel Pett from the British Museum (Part 3)

Our Cultural Institutions page highlights our ongoing support of museums and cultural institutions with free accounts and access to tools. In Cultural Heritage Spotlight, we’ll explore museums and cultural institutions who are using 3D technology to bring new life to their collections. Today’s blog post features Daniel Pett’s effort to make the collections of the British Museum accessible to anyone in 3D and VR.

In the final part of Daniel Pett’s Q&A, the Senior Digital Humanities Manager at the British Museum gives feedback on people’s reactions to this new medium, and shares his thoughts on the next steps for 3D and VR.

8/ How have the reactions of the audience and the British Museum team been to this new medium?

Audience reaction has been quite interesting, but not really researched (there’s a Master’s thesis here if anyone is interested). There was an initial surge of interest and massive view counts for the very first models that were released (which, looking back at them with Thomas Flynn, we have decided all need redoing), but generally we have seen a long-tail model of interest. The majority of models have very low views, and we haven’t seen a great upsurge in usage even when associated news stories make the mass media (for instance the Jericho Skull or the Virgin and Child acquisition). Both of the articles cited here could have embedded our 3D content live, but didn’t. An opportunity lost.

Amongst colleagues, I think there is still a view that 3D is a nice thing but a bit of a luxury, and it comes with the question of what it achieves. Have a look at Mark Carnall’s commentary on Twitter when it comes to 3D – I think he’s spot on, an educated skeptic! Unless 3D content actually works in more ways than just a spinning object, what’s the point?

Ideally I would like far more exposure for our 3D output on our various interaction platforms – a higher volume of embeds on Facebook and Twitter, and more joined-up usage of models when objects are mentioned in newspaper stories (we should push the use of our models via embeds as a way of enhancing content). Of course there are a variety of tensions in the way; winning places within the social media schedule is not easy when the BM is so multifaceted – there is always something going on.

In many ways, our 3D models can push towards the Create Once, Publish Everywhere model that is often discussed at museum conferences.

9/ Do you wish to reach a particular audience with these models, or are they aimed at everyone?

Like all of the content that we produce (analogue and digital), there’s always a tension over audience consumption. We’re now doing far more user research on what we produce in the museum context, but this has not yet turned to the 3D output, so we perhaps do not yet have the best metrics and analysis to inform the decision makers. The number of followers we have is still tiny in the grand scheme of things – and the same could be said for all our social media platforms – until we reach millions we’re not spreading the gospel of museums far enough.

For me, the 3D content we produce has multiple audience levels that can be seen as the consumer: the specialist, the browser, the serendipitous stumbler. As I mentioned earlier, I’m most excited about seeing how people reuse our content; making our models downloadable (even under non-commercial terms) allows others to build on our work.

Archaeology has generally been ahead of museums and the digital humanities in most computational techniques for a long time now, and there are some brilliant exemplars out there for inspiration. Take, for example, Professor Linda Hurcombe’s project and her suggestion of replacing objects taken off display with white prints. Just a quick web search will bring up loads of archaeological research projects that will make you think.

10/ Could you imagine what is the next step for the British Museum regarding 3D and VR?

I don’t think I need to imagine 3D/VR steps for the British Museum – things are happening right now and some projects have been deployed already. We’ve had two successful small-scale VR/360 immersive projects that were created with Soluis Heritage: the Bronze Age roundhouse and, most recently, the African Rock Art project’s ‘Game Pass Shelter’ app for iOS and Android. There has also been some experimentation in house with the use of Unity to create environments, which may continue, and there’s some large-scale (in terms of object size, not project staff) scanning currently underway within our Africa, Oceania and the Americas department, led by Jago Cooper. Some of my colleagues have previously worked with Gamar on the Gift for Athena app, and we have experimented with the use of Augment for gallery text labels and bringing more content to the viewer.

There’s some fantastic 3D content coming soon via collaborative project work with the Art Research Centre at Ritsumeikan University on a small part of the Museum’s fabulous netsuke collection, and annotated archaeological site representations from the Amara West team led by Neal Spencer.

Bringing more people on board and devolving responsibility for asset creation – that is the next logical step, as our crowdsourcing efforts have shown. If you look at Scan the World’s British Museum collection, they have produced far more models than we can achieve with a very small team. So over the last few months, Tom Flynn and I have taught museum curators and colleagues how to use our workflow to create their own models – Marcel Maree is capturing statues of Sekhmet, Jamie Fraser has been working on Levantine material, Anna Garnett has worked on Amara material, and Jennifer Wexler has learnt the workflow, produced a model of Horus and is now teaching others.



Anyone can make 3D models; we cannot stop the public coming in and doing so if they have basic equipment and online 3D creation apps. (Our terms and conditions of entry prevent commercial use of photographs and 3D models made within the Museum campus.) Take a look at the work of Cosmo Wenman, for example – someone who has captured British Museum objects to good effect – and the amazing work Tom Flynn has done, like the Amitābha Buddha from Hancui.

Experimentation has also been ongoing within the Samsung Digital Discovery Centre led by Juno Rae and Lizzie Edwards (which builds on the work of Shelley Mannion, Kath Biggs and Faye Ellis) bringing 3D, AR and VR to our younger audiences. (As an aside, Kath Biggs printed our very first MicroPasts model in the Great Court of the Museum in bright blue plastic and I now use this piece to scrape ice off the car!)

There are also various other VR projects in production right now – you might see them by the time this Q&A is published, but I cannot discuss them yet. In the future, I can see more of our content being used to recreate ancient archaeological environments in the exhibition space – for example, destroyed secular architecture – and situating our collection in people’s imaginations. We may also see more of the type of work that Stuart Eve did whilst completing his PhD at UCL (http://www.dead-mens-eyes.org).

11/ Do you have a fun fact to share about one of your models, and/or a model that you particularly want to highlight?

The Jericho Skull was a fantastic model to create, and came about from having lunch with the curator, Sally Fletcher, who said that it was off display and that if I had time to scan it, I could do so. So on the same day I took the photographs (Tom Flynn did the same with the statue of Idrimi, King of Alalakh, for Jamie Fraser – a lunchtime chat, then photographing at 5pm!), processed the model overnight and it was ready the next day. The skull is one of the most amazing objects in the British Museum’s Middle East holdings and it was such a privilege to get access to it. That said, my colleagues and I are so lucky to work in the Museum and to be given access to all of the amazing things that we have.

Thank you for asking me such a wide range of questions. If people have further questions, you can always find me on Twitter.

You’re welcome Daniel! Thank you very much for your time and for the precious information and highlights you gave us.

If you are part of a cultural institution, get in touch with us at museums@sketchfab.com to set up your free business account.

Cultural Heritage Spotlight: Q&A with Daniel Pett from the British Museum (Part 2)



This article is part II of our Q&A with Daniel Pett, the Senior Digital Humanities Manager at the British Museum. The British Museum has published 143 models on Sketchfab. Daniel now gives us his feedback on his experiences, but also explains how Sketchfab helps him to showcase the British Museum’s wonderful artefacts.


4/ Was it easy when you began? Were there any barriers to entry?

Learning the basic technique was quite easy, as I learnt from the excellent tutorial documentation that Andy Bevan and Adi Keinan-Schoonbaert wrote for MicroPasts. I then learnt more from practising the skills and from working alongside Thomas Flynn. As mentioned earlier, the biggest barrier to doing this was time! There’s never enough, and with my young family and work commitments it is hard to fit it all in.


I’m still learning, and there’s always something to improve on with every model I make. Different software might improve my output – better masking, etc. This is one reason why I place all my raw photos and masks onto GitHub, so that someone can improve on my amateur skills and have the chance to participate in reproducible science.


5/ Is it easier now?

I’d say it is probably easier now that I know how to do certain things or capture images in the best way for what I want to do. It is still hard! I’d like more time, more computing power and some decent cameras.


6/ What are the Sketchfab features that you think are the most useful for your models?

Going back to the MicroPasts project, we built an open-source 3D viewer which we had intended to use for viewing our models. It created quite an overhead in support and development and was never as good as the commercial viewers we saw. There are other open-source viewers; however, we believed that large reach and exposure of our work was vital for that project to succeed, so we made a decision to start sharing our models on Sketchfab. When you see the size of the network that uses Sketchfab, the decision to produce content there is easy!

Benefits that we could glean (and this goes for BM work too and indeed any cultural institution) are:

  1. Embeddable content in a variety of platforms (HTML and social media for instance)
  2. Customisable viewing environment, one can deploy branded backgrounds if required
  3. Powerful editing tools
  4. Download settings, with licence selection
  5. View counts, commenting and a different social network to build
  6. Mobile device ready

The Sketchfab interface is generally great, but there are things we have always wished were there – for example measuring tools (which Sketchfab’s dev team have now released, thank you!), scales, and the ability to create better metadata (maybe from a markdown file that you upload). However, we do realise that we (the cultural sector) are not the only users – you cannot build everything people request. In terms of a long-term repository for archiving 3D data, we don’t see the Sketchfab platform as the be-all and end-all; for us it is a presentation layer. We and others need to consider best practices for storing 3D output and source materials. At the moment, the raw data and STL models that I create are stored on our GitHub profile.


7/ What goal do you achieve in creating all these models? How is it consistent with the British Museum digital strategy?


The goal of creating these models has several aspects for me: firstly, to demonstrate that there is a need for this work and that it might create employment in the future for someone; secondly, to maintain our position as a museum that tries to push boundaries; thirdly, for personal satisfaction, in that I can create something that others might find interesting (while allowing me to learn a new skill); and fourthly, for my colleagues to make use of them.

At present, our BM digital strategy is not published in the public domain so I cannot reference it easily; however, the generation of 3D data fits firmly within the Digital Humanities plan that I am writing and is a fundamental basis of the digitisation strand. We are now expected to produce high-resolution digital assets for public and professional consumption and reuse, and 3D allows one to capture static images at the same time. In many ways there’s a natural place for 3D imagery to sit – within our photography team, who produce amazing work but as yet don’t have the staffing capacity to create models.

In terms of the Head of Digital’s vision for the museum of the future, 3D fits in well with his tenets of reach, mobile and revenue. As described earlier, we can use 3D to push BM content to potentially huge audiences via the Sketchfab embed mechanism on social media, which satisfies the reach agenda. The Sketchfab mobile-ready engine allows us to push the content out to mobile devices (on site this could be to museum hardware currently used for the multimedia guides or to personal devices, and offsite to daily mobile browsing). The third tenet, revenue, is just starting to make waves, with some experimental work going on with ThinkSee3D and our image licensing arm (BM Images) starting to license our 3D output for commercial publishing.

Thanks again for sharing, Daniel! Stay tuned for Part 3 of this Q&A next week.


Cultural Heritage Spotlight: Q&A with Daniel Pett from the British Museum (Part 1)

This was originally published as part 1 of a 3-part interview on the Sketchfab blog in January 2017.


Daniel Pett is Senior Digital Humanities Manager at the British Museum. He has a background in archaeology, having studied at the Institute of Archaeology (UCL) and Cambridge University, and has also worked in telecoms and investment banking technology, and subsequently as technical lead for the Portable Antiquities Scheme. He was co-lead on the MicroPasts project with Professor Andy Bevan (UCL) and now leads the British Museum’s foray into the world of Digital Humanities, sitting between the curatorial community of the Museum and the Digital and Publishing department. One of the most recent projects he delivered was the new Knowledge Search application for the Museum, which brings together many of the Museum’s resources in one interface.


As a side project he has co-created one of the largest and most breathtaking 3D/VR collections of cultural artefacts in the world. The British Museum was an early adopter of Sketchfab, creating its account in October 2014; 121 3D models later, with more than 380K views and 3,000 likes, it is now the most followed museum on Sketchfab. The Jericho Skull has been featured by CNN and National Geographic:

The first foray into making British Museum 3D content available on Sketchfab was through the MicroPasts project, using crowdsourced photo masking; subsequently, Thomas Flynn placed his models online under the BM banner. Daniel now shares the British Museum’s collections with the entire world, making them easily accessible for educational purposes, scientific research and, of course, for anyone who is interested in culture. Thanks to his initiative and knowledge transfer to colleagues, the British Museum is helping to democratise culture and digitally preserve its collections.

Sharing their artefacts is also a way to promote the British Museum’s collections and to encourage people to visit the museum to discover them. It is also a way to show hidden artefacts, since not all of the collections are on display, and an easy way to examine fragile or very small (or very large) artefacts.

Daniel will explain to us today how he has been able to achieve all of this with a restricted budget and a short time allocation!

Daniel, thanks for answering our questions:

First, could you explain your process for creating all these models?

Our 3D work is all based around photogrammetry, or Structure from Motion, and builds on work first done in the Museum by Southampton University on Hoa Hakananai’a, then by the MicroPasts team and collaborators, and finally by Thomas Flynn. The BM has seen other 3D scanning techniques employed, mostly via medical imaging (led by curator Daniel Antoine) and other more costly methods, but also some LIDAR usage and the work that CyArk conducted on our Assyrian reliefs. The famous archaeologist Dominic Powlesland has also done 3D work on BM collections data (see the following 3D cremation urn), and we’ve had collaborative work with students and academics since:

Our output is generally produced so that anyone can replicate what we do, and this is what has found its way onto Sketchfab. Our basic process for capturing a sculpture in a gallery, for instance, is as follows (bear in mind we can generally only do this during opening hours and under gallery lighting):

  1. Find an appropriate sculpture, usually on the morning walk through the galleries to the office.
  2. Take photos at 5–10 degree intervals at low, mid and high levels.
  3. Process the models – I generally use Agisoft’s PhotoScan Pro (which was provided during the AHRC-funded MicroPasts project), and Thomas Flynn has experimented with a variety of software.
  4. If the object is complex, deploy the photographs to MicroPasts for photomasking by our crowd contributors (any museum can use this facility; documentation on how to do this exists).
  5. Import the masks into PhotoScan.
  6. Align photos.
  7. Build the dense cloud.
  8. Build the mesh.
  9. Build the texture.
  10. Upload the models, masks, PhotoScan files and images to GitHub and obtain a DOI for the 3D capture.
  11. Upload to Sketchfab under the licence that the Museum allows (Creative Commons Attribution-NonCommercial-ShareAlike), under its interpretation of the Public Sector Information Act.
  12. Ask a curator if they will annotate the model and encode their knowledge for others to enjoy.
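As a rough back-of-the-envelope illustration of the capture pattern in step 2, the short Python sketch below estimates how many photographs a full capture implies. The interval values, the three-level assumption and the function names are my own illustrative choices, not part of the Museum's documented workflow:

```python
import math

# Illustrative sketch only: estimate how many photographs a capture
# needs, assuming one ring of shots around the object at each height
# level (low, mid, high) and a chosen angular interval between shots.

def photos_per_ring(interval_degrees: float) -> int:
    """Shots needed to cover a full 360-degree ring at the given interval."""
    return math.ceil(360 / interval_degrees)

def total_photos(interval_degrees: float, levels: int = 3) -> int:
    """Total shots for `levels` rings (low, mid and high by default)."""
    return levels * photos_per_ring(interval_degrees)

if __name__ == "__main__":
    # The 5-10 degree range mentioned in step 2 above
    for interval in (5, 7.5, 10):
        print(f"{interval} degree interval: {total_photos(interval)} photos")
```

At 10-degree spacing this comes to roughly 108 frames, and at 5 degrees around 216 – one reason the processing is often batched overnight.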

Some 3D models have been produced in collaboration with other institutions. For example, the very recent netsuke models have been produced by the Art Research Centre at Ritsumeikan University, Japan:

Is it costly (in time, money, equipment, etc.)?

The biggest cost for all activities is time – we’re all time poor! We use very basic equipment: mobile phones with decent sensors, low-end digital SLRs and reasonable compact cameras, allied with lazy Susan turntables and tripods. At the moment I’m just using my OnePlus 3 mobile, as my one-year-old daughter broke my DSLR when she pulled it off the table at home.


We have had no budget set aside (so far) for this work and 3D imagery is usually captured when the opportunity arises or as we walk through the galleries to work and the light is okay! The equipment I use is my own and most of the processing and masking is done in my own time (either on the commute to work or batched overnight). I use a MacBook Pro with 16GB RAM or an iMac with 20GB RAM. My work PC cannot handle the load! My colleagues (Jennifer Wexler, Andy Bevan, Chiara Bonacchi, Thomas Flynn and Adi Keinan-Schoonbaert) have access to a few machines of different types, and I am unsure what our contributors to MicroPasts used (for example Hugh Fiske who now produces great content for DigVentures).

What has been the turning point that made you think “we need to digitize our collection”?

The Museum has been digitising its records for many years now, building on antiquarian and more recent analogue methods, and the creation of 3D is just an extension of this process following the adoption of photography. What I want to achieve is a total record for museum objects, but as we have around 8 million objects, many of them unsuitable for 3D (for example, in-copyright works of art, culturally sensitive objects and extremely shiny metals), this will not be practicable.

So for the ones we can capture, we have curatorial interpretation (the work that Neil Wilkin did on monumental dirks, for instance – see The Beaune Dirk 3D model), multiple images, factual data, a 3D model that can be annotated, and a wide variety of data points that can be linked off to other information sources.

No museum, apart from maybe the Smithsonian, is making 3D data work effectively in its resources yet. I want to see our 3D work being not only a research tool, but a revenue creator, a knowledge-sharing device and a way of allowing serendipitous reuse of Museum content. For example, we could make boxes of British Museum chocolates direct from the collection on demand (I’ve experimented with silicone moulds and 90% dark chocolate; the evidence has been eaten); we could make concrete casts of the Molossian Hound (one of my favourite pieces of sculpture in our collection) from the model on Sketchfab and see them in garden centres worldwide; we could see the shop not having to keep replicas in stock, but instead printing on demand; and we could see manufacturers buying a licence to produce mass replicas of BM content, with the museum taking royalties.

Some of the things I’ve seen BM content being used for are really inspiring, for example Robert Kaleta’s PhD work at UCL, the fantastic Paul Mellon funded Digital Pilgrim from Amy Jeffs and Lloyd de Beer, or the Museum in a Box project that George Oates and Thomas Flynn run. I would ideally like to take the opportunity to scan any new acquisition (see for example the Virgin and Child) or objects that are going on loan for long periods (see for example the Ancient Lives collection) which allows the public to still see them in detail even if they cannot view the real thing.

We’re also seeing our 3D work propagating onto the museum floor as handling objects (for example the Egyptian house for the Sunken Cities exhibition handling desk), as information points in gallery (for example the Jericho Skull, statue of A’a and the Kakiemon ‘Boy on a Go board’) and in VR work that the museum has done on the Bronze Age and African Rock Art. For all of these bits of work, we’ve had supportive curatorial staff who generally have been enthused by chat in the canteen at lunch.

The use of 3D is now making things possible that 2D representation cannot, but I do not believe Adrian Hon’s assertion that VR will break the museum. 3D has the potential to augment, enhance and improve the museum experience. Another area where we’ve started to use 3D models and printing is exhibition design: for example, instead of cardboard mockups of display spaces we can now print directly from CAD models, saving our designers lots of time.

Thanks again for sharing, Daniel! Stay tuned for Part 2 of this Q&A next week.


Google Search Appliance – British Museum install

Working with Extended Content Solutions, I have been project/product managing the new ‘Knowledge Search tool’ for the British Museum. Built using AngularJS on ECS’s proprietary software, it pulls together multiple data sources through the use of Google Search Appliance.

This is one of the first products I have worked on that isn’t open source.