The Evolution of the Collections Database
Ian Rowson


This presentation is about how both changes in technology and wider influences have affected Collections
Management System (CMS) development, and how they will continue to do so. As I represent Adlib
Information Systems, this talk will have a strong Adlib flavour.
I’m going to attempt to break it down into a chronological progression, and try not to get too bogged down in
the technicalities, although of course they do play their part in the story.


The beginnings of collections automation
Libraries led the way with their use of computer systems to store catalogue data in the late 1960s, and as a
software product, Adlib shares this heritage. Prior to this of course, libraries had employed card catalogues.
Computerised library systems largely rely on cataloguing according to the MARC standard, also developed in
the 60s – one copy of a book is, after all, very much like another, and therefore cataloguing can become a
largely automated process. Library catalogues were basically a replacement for the old card index system;
workflow processes such as book purchasing and management of loans didn’t follow until later.


Adlib came on the scene in the mid-1970s, and was built using the FORTRAN IV programming language. The
software was designed as a generic ‘information management’ tool. In other words, a software toolkit for
building database applications, a bit like the modern Microsoft Access or Filemaker Pro.
The first customer system was shipped in 1978.
A restriction on the uptake of automation in libraries was cost. This was well before the era of the PC, so
computers were large and expensive. Adlib software was designed to run on PRIME minicomputers.
While researching this presentation I came across a great still from a TV advert for PRIME computers, which
I have to share with you. We can laugh at the idea of ‘stepping into the 80s with Prime’, but doesn’t that
picture remind us (those of us who are old enough to remember the 80s, anyway) how far things have
moved on, technologically speaking?


And for me, this is the great dilemma of museum computing. We want to preserve our collections data, like
our collections, into the future. But can we be sure that in 20 years’ time we won’t be laughing at the
technology we use today? I think we probably will be.


Technology races ahead at breakneck speed. Obsolescence of computing hardware, operating systems and
software, not to mention data storage media, is among the greatest risks posed to our data. It’s not just that
these issues may arise if we’re unlucky. They WILL arise, and so we have to be ready for them.
If you take only one thought away from this session, I’d like you to ponder the data preservation issues in
your own institution.


What led to the demise of PRIME, along with so many other computer manufacturers, was of course the
emergence of the personal computer, the PC. Fortunately, Adlib as a company had anticipated and prepared
for this eventuality: our software was successfully ported to MS-DOS, and then on through the different
versions of Windows, and with each development customers’ data was safely carried forward into the new
computing environment.


These changes happened as a continuous evolution. At no point was a total cut-off imposed upon the users
of the Adlib software which “forced” them into adopting a new technology. Older implementations faded
away “naturally” and new technology was introduced gradually. This was, and still is, a deliberate choice,
allowing users, but also the software developers and support staff, to live through smooth transitions. For
this reason new developments will continue to take place on multiple tracks, but with a convergence towards
the same technology.


If planned correctly, a technology transition can occur as a natural process, almost unnoticed by the user.
For example, at the moment we are in the midst of another such transition. Like many of our competitors of a
similar age and heritage, Adlib started off running its own ‘proprietary’ or native database platform. This data
format is unique to our software.


Modern IT departments are reluctant to implement such databases, preferring instead to adopt more widely
employed database platforms such as MS SQL Server and Oracle. About four years ago we adapted Adlib
to run on those platforms, and we’ve been gradually upgrading customer systems, on request, to use them.


We anticipate that use of the proprietary database will eventually decline, but this process will likely take a
fair number of years. We certainly have no plans any time soon to withdraw support for the many hundreds
of ‘native’ Adlib systems currently out in the field.


I mentioned earlier how Adlib was developed as a ‘database building kit’. The name ‘Adlib’ actually stands
for ‘adaptive library’, meaning the structure of the system is flexible.


Adlib’s application-building toolset, which, true to the original concept from the 70s, is shipped with each copy
of the software, enables the trained system administrator to carry out a whole range of tasks, including
adding new databases, fields, indexes or screen layouts to the system. Such is the capability of this tool
that we have no need to use externally provided database software within our organisation. All our internal
systems, such as our customer relationship management and helpdesk databases, are built using our own
software.


The slide shows one screen from the current version of Adlib Designer, which is a Windows-based
application. Back in the early days of Adlib this functionality was offered through a character-based interface
that ran from the operating system prompt. This was powerful, but quite tricky to use.


A library cataloguing system was the first commercially available product built using Adlib, but it wasn’t long
before customers using this application came to us and asked if we could build them a database for
recording their object collections as well. This was done on an ad-hoc basis, until the emergence of
Collections Trust’s (at that time MDA) Spectrum standard gave clear direction for software developers about
what a museum CMS should look like.
Adlib in fact played a supporting role in the development of the Spectrum standard, and you can be sure we
will continue to do so in future.


Incidentally, the same approach was adopted for the development of the Adlib archive application in the late
90s, although this time the standard to be implemented in the software was the archival management
standard, ISAD(G).


What I’ve simplistically sketched out so far is a linear form of technical development, eventually leading to
the Adlib Museum CMS package in use in over 1,500 institutions worldwide.


However, developments such as this, which were mirrored across the world by many software companies,
were not universally welcomed by the museum profession in the early days. In her MA thesis, Inventory,
Access, Interpretation: The Evolution of Museum Collection Management Software, Perian Sully describes
how in the late 60s IBM and the Metropolitan Museum of Art had convened a conference to discuss the
future of computer technology in US museums:


And I quote:
“This concern that curatorial or scholarly product would be overshadowed or undermined by the computer is
a recurrent topic to this day. This fear was summarized in 1968 by curator J.C. Gardin, when discussing the
institutional implications of collections technology. He asks if there is:


a) a danger of substituting superficial, mechanical knowledge for “organic and deeper form of culture” gained
from the personal work of curators,


b) a contradiction between rigidly organized data of the database and the intellectual viewpoints of personal
curatorial files, and


c) a risk of subordinating individual research to “de facto monopolies of information that may eventually have
the power to control the ‘whos’ and ‘whats’ of scientific inquiry?”


Despite the early worries of curators that their oversight and knowledge would not be properly reflected
within these new computer systems, the need for tracking and accountability of objects took centre stage
with other professionals.”
Sully, P (2006)


In other words, the great motivator for the uptake of CMS in US museums was the need to carry out audits
of collections in order to demonstrate accountability.


Sully continues:


“Museums of all sizes found that they needed to get their record-keeping in order. In the 1960s, large
institutions had led the charge, but during the 1970s mid-sized museums realized that, they too, needed to
make sure their records were in order. Fortunately, computers had decreased substantially in cost. The
microcomputer became widely available to museums with fewer resources.”
Sully, P (2006)


Here in the UK, I would argue that although accountability was no doubt a driver in the early days, a great
impetus came in the late 1990s from the New Labour government’s ‘e-learning’ objectives, mainly set by the
National Grid for Learning, the vision for which was first outlined in the report Connecting the Learning
Society (Department for Education & Employment, 1997).


Then came a swathe of texts focussed on delivering digital collections to fuel an ‘educational provision’
agenda, although these did tend to gloss over issues about the management or curatorship of the digital
collections developed for this purpose: see, for example, A Netful of Jewels: New Museums in the Learning
Age (National Museum Directors’ Conference, 1999) and Building the Digital Museum: A National Resource
for the Learning Age (Smith, 2000). Together these texts signalled a new direction in policy which aimed to
establish learning fully as the central function of the museum. New technologies were deemed to be the
method of delivering that service to the wider community (Smith, L 2000).


The funding possibilities which ran ‘on the back of’ these initiatives led to a great expansion of CMS
implementation in the UK. This infamous ‘rush to digitise’ resulted in many projects that opened a window
onto collections data, some of which were perhaps not quite ready to have a window opened on them –
mainly for reasons of incomplete or unverified data. After all, what museum is not carrying a documentation
backlog?


The overriding desire to open up collections data for educational and public access became a major
justification for accessing funding to undertake a CMS project. This, of course, was made both possible and
desirable by the continuing growth of the world wide web.


Current CMSs have since matured to offer a bewildering array of functionality, not unlike business software
applications such as MS Word. Who uses more than about 20% of the capabilities of these?


Although (like in the library model beforehand) CMS began as simple tools for cataloguing collections, they
are now used to track inventories, donor information, condition reports, artist biographies, exhibition
information, bibliographic texts, and curatorial papers as well as present multimedia files and interface with
the museum’s Website. The function is shifting from being a collections management system to a content
management system.
(Sully, P 2006)


We now take for granted such features as image/multi-media management, driven by needs to provide
exciting interactive material for web users, but enabled by the capabilities of the inexpensive powerful PC.


Also web-driven is support for the ‘social networking’ phenomenon; we are incorporating into Adlib products
the ability to capture User Generated Content such as comments, tagging, uploaded images, etc.


However, despite all this functionality, in my own personal experience (and this is borne out by Sully’s
research) the CMS installed in the average museum remains quite severely under-utilised. I wonder why this
should be?


Sully did some research into this issue. She tells us that Richard Gerrard looked at the number of failed
projects in the past and suggested that failure was a historical trend, because there was often early
enthusiasm for new features, buoyed by an infusion of grants. This, he said, created inflated expectations on
the part of users, a lack of critical examination by developers, and resistance within the institution’s
administrative structure. Soon thereafter, the feature which promises this great advancement in productivity
is abandoned in favor of the next technological wonder.
(Sully, P 2006)


My take on this is that the purchase and installation of a CMS is often championed by a particular member of
staff. When that person leaves for another job, the system can then seem to ‘drift’ without direction. What is
really needed is for a specific member of staff to be assigned to manage the system, but in a smaller
institution this often does not happen; the system is much more reliant on the interests of particular
personalities, whose main job is invariably something else.


I’m going to bring things right up to date, to look at how current trends are shaping the CMS of the future.


A key driver of development at the moment is the API – the application programming interface.
But what is an API, and why would you need one?


Modern computer program design (service-oriented architecture, or SOA) promotes breaking complex
applications up into small, manageable components that communicate with each other using APIs.
Designing programs in this way not only makes a system flexible and scalable, but also provides a platform
for integration between different software components (even from different vendors). Adlib currently supports
this model to some degree.
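
As a rough illustration of the idea (and not Adlib’s actual interface: the service URL, endpoint and field
names below are invented), here is a minimal Python sketch of one component fetching an object record
from another over a small HTTP API:

    # One SOA-style component (say, a web front end) asking another (a
    # collections service) for an object record over HTTP. Everything about
    # the endpoint and the fields is hypothetical, for illustration only.
    import json
    import urllib.parse
    import urllib.request

    SERVICE_URL = "http://collections.example.org/api"  # hypothetical service

    def get_object_record(object_number: str) -> dict:
        """Fetch one object record from the collections component as JSON."""
        url = f"{SERVICE_URL}/objects/{urllib.parse.quote(object_number)}"
        with urllib.request.urlopen(url) as response:
            return json.load(response)

    record = get_object_record("1978.0042")
    # Any other vendor's software could consume the same response, which is
    # precisely the integration benefit the API provides.
    print(record.get("title"))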


Let me give you a real-world example from a fairly recent development: the Adlib image handler API.


The idea behind this is as follows.


Adlib, like most other CMS packages, has a built-in capability to display linked images of collection objects.


However, many customers are already using other software with similar capabilities, such as content
management and/or digital asset management packages, leading to overlap and duplication of functionality.


Images which are stored in one software package need to be accessible from the others.
APIs offer a solution to this problem.


The Adlib media handler separates the image management function out from our software, in such a way
that it can be easily accessed either by Adlib or by other external software. Furthermore, this also raises the
possibility that images held in other software (such as a DAMS) could be linked to by the Adlib CMS, instead
of using the Adlib image handler.
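
To give a feel for the separation, here is a hedged sketch of a client fetching a resized derivative from a
stand-alone media service; the URL pattern and parameters are invented for illustration, not taken from the
real Adlib media handler:

    # Image management behind its own small service: the CMS, the website
    # and a DAMS can all fetch images the same way. Hypothetical interface.
    import urllib.parse
    import urllib.request

    MEDIA_URL = "http://media.example.org/images"  # hypothetical media service

    def fetch_derivative(image_id: str, width: int, path: str) -> None:
        """Request a resized derivative of a stored image and save it locally."""
        query = urllib.parse.urlencode({"id": image_id, "width": width})
        with urllib.request.urlopen(f"{MEDIA_URL}?{query}") as response:
            with open(path, "wb") as out:
                out.write(response.read())

    # The CMS might embed such a URL in a record display, while the museum
    # website requests a smaller derivative of the very same master image.
    fetch_derivative("OBJ-1978-0042", width=400, path="thumbnail.jpg")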


But we are not stopping there. Eventually, all programs in the Adlib suite will follow the SOA paradigm. To
support this, a new set of APIs is being developed, supporting both data access and metadata access. The
modules will be accessible through web services and as “traditional” (managed) DLLs.


External stakeholders (including customers) were invited to cooperate in the API development process
earlier this year, and development is already under way.


Another current development from the IT world is that of cloud computing – but what does it mean?


I’ve turned to Wikipedia for an explanation:


Cloud computing is a style of computing in which information technology resources are provided as a service
over the Internet. Users need not have knowledge of, expertise in, or control over the technology
infrastructure "in the cloud" that supports them. The term cloud is used as a metaphor for the Internet, based
on how the Internet is depicted in computer network diagrams, and is an abstraction for the complex
infrastructure it conceals.


The concept incorporates infrastructure as a service (IaaS), and software as a service (SaaS) as well as
other technology trends from the last couple of years that have the common theme of reliance on the Internet
for satisfying the computing needs of the users. Cloud computing services usually provide common business
applications online that are accessed from a web browser, while the software and data are stored on the
remote servers.


The key driver behind cloud computing is that users can avoid capital expenditure on hardware and software,
rather paying a provider only for what they use. Consumption is billed on a utility (e.g. resources consumed,
like electricity) or subscription (e.g. time based, like a newspaper) basis with little or no upfront cost. Other
benefits of this time sharing style approach are low barriers to entry, shared infrastructure and costs, low
management overhead and immediate access to a broad range of applications. Users can generally
terminate the contract at any time (thereby avoiding return on investment risk and uncertainty) and the
services are usually covered by service level agreements.


According to Nicholas Carr the strategic importance of information technology is diminishing as it becomes
standardised and cheaper. He argues that the cloud computing paradigm shift is similar to the displacement
of electricity generators by electricity grids early in the 20th century.


(Wikipedia 2009)


Adlib have been offering our CMS systems ‘in the cloud’ as a subscription service for a couple of years now,
and while we have a few (mainly commercial) customers using these services, generally speaking uptake
from the museum sector has been slow.


I’d suggest there are a couple of possible reasons for this:


• In the UK, museums so far have been able to access funding for capital projects from a variety of
  sources. Funding which pays an annual fee, on the other hand, is more difficult to raise.
• Museums are reluctant to hand over custody of their data to an outside organisation, and of course
  there are risks associated with this which must be managed.


Wikipedia lists seven security issues which one should discuss with a cloud-computing vendor in order to
mitigate risks:


    1. Who has access to your data?
    2. Is the vendor willing to undergo external audits and/or security certifications?
    3. Data location: does the provider allow any control over the location of your data?
    4. Data segregation: is data encryption available?
    5. Recovery: find out what will happen to data in the case of a disaster; do they offer complete
       restoration and, if so, how long would it take?
    6. Investigative support: enquire whether the vendor has the ability to investigate any inappropriate or
       illegal activity.
    7. Long-term viability: ask what will happen to data if the company goes out of business; how will it be
       returned, and in what format?


In practice, one can best determine data-recovery capabilities by experiment: asking to get back some data,
seeing how long it takes, and verifying that it is correct.


(Wikipedia 2009)
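
That experiment is easy to script. As a minimal sketch, assuming the provider exposes a simple export
endpoint (the URL below is hypothetical, not any particular vendor’s interface), one might time the round trip
and verify the returned data against a checksum recorded when the data was first handed over:

    # Time a data-recovery round trip and verify the result is intact.
    # The export endpoint is an assumed, illustrative URL.
    import hashlib
    import time
    import urllib.request

    EXPORT_URL = "http://cloud-provider.example.org/export"  # hypothetical

    def checksum(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def test_recovery(expected_checksum: str) -> None:
        start = time.monotonic()
        with urllib.request.urlopen(EXPORT_URL) as response:
            data = response.read()
        elapsed = time.monotonic() - start
        print(f"Recovered {len(data)} bytes in {elapsed:.1f} s")
        # Correctness matters as much as speed: compare against the
        # checksum taken when the data was handed over.
        assert checksum(data) == expected_checksum, "recovered data differs!"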


Our brand name ADLIB stands for “ADaptive LIBrary” system, and although the use of our software is no
longer restricted to just libraries, the “adaptive” or “flexible” qualification has always been retained as the key
benefit of using our software.


• Flexibility in the form of the Adlib Designer toolkit, which allows the trained system administrator to
  make changes to both the database structure and the behaviour of the software.

• Flexibility in the form of APIs, which allow tight integration with other software applications in use
  within the institution, and allow data to be re-purposed in audio tours, on the web or by digital asset
  management systems.



• Flexibility in the form of different ways you can run the software – by traditional purchase, or ‘in the
  cloud’ as a service.


In the next generation of products we want to move a step further and place even more flexibility in the
hands of the actual user of the system, as opposed to the system administrator. In current versions this
process has already started, for instance by enabling users to adapt their own toolbar, or to generate reports
on the fly using the “print wizard”. This principle will be implemented throughout, with more “personal”
preference settings. One can think of search behaviour (e.g. default truncation or provision of lists) or the
appearance of the software (allowing the user to add style sheets, personal output formats, or change colour
schemes).


One thing you can be sure of is that the Adlib product range will remain at the leading edge of CMS
development. We understand that technology is a shifting sand on which to build, but we employ proven
strategies to deal with that.


Adlib has the experience and the capability to help all collecting institutions to secure their data for future
generations.


References

SMITH, L. (ed.) (2000) Building the Digital Museum: A National Resource for the Learning Age. Cambridge:
MDA.

SULLY, P. (2006) Inventory, Access, Interpretation: The Evolution of Museum Collection Management
Software [online]. MA thesis, John F. Kennedy University. Available at:
http://conference.archimuse.com/biblio/inventory_access_interpretation_the_evolution_of_muse.html

WIKIPEDIA (2009) API. Available at: http://en.wikipedia.org/wiki/API

WIKIPEDIA (2009) Cloud computing. Available at: http://en.wikipedia.org/wiki/Cloud_computing



