ANNALS OF GEOPHYSICS, VOL. 49, N. 2/3, April/June 2006

Science requirements and the design of cabled ocean observatories

Alan D. Chave (1), Gene Massion (2) and Hitoshi Mikada (3)

(1) Deep Submergence Laboratory, Woods Hole Oceanographic Institution, Woods Hole, MA, U.S.A.
(2) Monterey Bay Aquarium Research Institute, Moss Landing, CA, U.S.A.
(3) Department of Civil and Earth Resources Engineering, Kyoto University, Japan

Abstract
The ocean sciences are beginning a new phase in which scientists will enter the ocean environment and adaptively observe the Earth-Ocean system through remote control of sensors and sensor platforms. This new ocean science paradigm will be implemented using innovative facilities called ocean observatories which provide unprecedented levels of power and communication to access and manipulate real-time sensor networks deployed within many different environments in the ocean basins. Most of the principal design drivers for ocean observatories differ from those for commercial submarine telecommunications systems. First, ocean observatories require data to be input and output at one or more seafloor nodes rather than at a few land terminuses. Second, ocean observatories must distribute a lot of power to the seafloor at variable and fluctuating rates. Third, the seafloor infrastructure for an ocean observatory inherently requires that the wet plant be expandable and reconfigurable. Finally, because the wet communications and power infrastructure is comparatively complex, ocean observatory infrastructure must be designed for low life cycle cost rather than zero maintenance. The origin of these differences may be understood by taking a systems engineering approach to ocean observatory design through examining the requirements derived from science and then going through the process of iterative refinement to yield conceptual and physical designs. This is illustrated using the NEPTUNE regional cabled observatory power and data communications sub-systems.

Key words ocean observatory – science requirements

Mailing address: Dr. Alan D. Chave, Deep Submergence Laboratory, Woods Hole Oceanographic Institution, Woods Hole, MA 02543, U.S.A.; e-mail: alan@whoi.edu

1. Introduction

Since the 1800s, oceanographers have explored and sampled across two-thirds of Earth primarily using ships as observational platforms. This has yielded a series of snapshot views of the oceans which have limited resolution in time. Measurements and models from this exploratory, mapping and sampling phase of oceanography have resulted in growing recognition of the diversity and complexity of processes that operate above, within and beneath the oceans. The questions posed from these efforts increasingly cannot be answered using only the tools of the present, in large part because of a limited ability to resolve temporal change. For this and other reasons, the ocean sciences are beginning a new phase in which scientists will enter the ocean environment and adaptively observe the Earth-Ocean system. Routine, long term access to episodic oceanic processes is crucial to continued growth in the understanding and predictive modeling of complex natural phenomena that are highly variable, spanning many scales of space and time.

This new ocean sciences paradigm will be implemented using innovative facilities called
ocean observatories which provide unprecedent-
ed levels of power and communication to access
and manipulate real-time sensor networks de-
ployed within many different environments in
the ocean basins. These facilities, their real time
or near-real time information flow, and the data
archives associated with them, will empower en-
tirely new approaches to science. In addition,
ocean observatories will enable educational-out-
reach capabilities that can dramatically impact
general understanding of, and public attitudes to-
ward, the ocean sciences and science in general.

The crucial role ocean observatories will
play in 21st century oceanography has received
international recognition, and early programs
are underway around the world (e.g., Beranzoli
et al., 1998; Momma et al., 1998; Delaney et al.,
2000; Glenn et al., 2000; Kasahara et al., 2000;
Austin et al., 2002; Chave et al., 2002; Favali 
et al., 2002; Hirata et al., 2002; Schofield et al.,
2002; Petitt et al., 2002; Beranzoli et al., 2003;
Dewey and Tunnicliffe, 2003). Major ocean ob-
servatory infrastructure programs are under con-
sideration in Japan (Advanced Real-Time Earth
monitoring Network in the Area or ARENA;
Shirasaki et al., 2003), Europe (The European
Seafloor Observatory Network or ESONET;
http://www.abdn.ac.uk/eco-system/esonet/), and
the United States (Ocean Observatories Initia-
tive or OOI; Clark and Isern, 2003). The largest
component of the OOI is a US-Canada regional
cabled observatory called NEPTUNE (North-
East Pacific Time-integrated Undersea Net-
worked Experiment) for which the Canadians
received C$62M in late 2003. Almost all of the
cited installations use submarine cables to link
land to seafloor, and hence the remainder of this
paper will focus on cabled ocean observatories.

2. A generic cabled ocean observatory

In an attempt to establish common terminol-
ogy, a generic cabled ocean observatory struc-
ture will be defined. A comprehensive descrip-
tion of the hardware design and implementation
of cabled ocean observatories may be found in
Chave et al. (2004). A software or cyberinfra-
structure framework for ocean observatories is
described by St. Arnaud et al. (2004).

Figure 1 shows one or more sensors deployed
in the water. A suite of sensors is the fundamental
measurement device at an observatory, and is ulti-
mately the source of a data stream. Sensors are
part of, or attached to, instruments. Instruments
are attached to instrument ports on an observato-
ry node located on the seafloor via an access lay-
er data communications connection (typically,
RS232/RS422 serial or 10/100BaseT Ethernet).
The observatory node also supplies power (typi-
cally, 12 or 48 VDC) and may distribute accurate
time using standard codes. Custom instrument
ports may support special instruments with

Fig. 1. Cartoon illustrating the major components of a cabled ocean observatory. See text for discussion. Core
and PI (Principal Investigator) instruments represent a preliminary classification of sensors that will be refined
in the process of system development.




unique needs. Standard instrument ports are more
generic. Ocean observatory nodes are connected
to each other and to a shore station via a core lay-
er data communications link (typically, high
speed serial or Gigabit Ethernet via a backbone
submarine fiber optic cable). The backbone cable
also contains an electrical conductor, and provides
power to the observatory nodes from shore with a
seawater return path. An ocean observatory oper-
ations center monitors, maintains, controls, and
manages the components of the observatory.

An observatory instrument control process is
used by operators or guest scientists to control
node instrument ports, instruments, and sensors.
Core instruments are managed by the ocean ob-
servatory operator or their designees. Individual
investigator instruments may be one of a kind,
and are deployed on behalf of a guest scientist.
The instrument data logging process gathers real-
time or near real-time data (and sometimes in-
strument metadata) from instruments and stores it
temporarily. This step may happen in the water,
on shore, or both. The data archive process gath-
ers/receives data and/or metadata from the instru-
ment logging process or in some cases directly
from the instrument itself. The data archive
process may extract instrument metadata from
the data stream, post-process the data stream, or
manage it in some other way. In some ocean ob-
servatories, a streaming data process provides
subscribers with real-time data from sensors/in-
struments. In most cases, the ocean observatory is
connected to the Internet, and an observatory
server provides scientists and the public with ac-
cess to certain observatory services. 
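
The containment hierarchy implied by this terminology can be summarized in a few lines of code. The Python sketch below is purely illustrative; the class and field names are invented here and do not correspond to any actual observatory software framework.

# Minimal sketch of the generic observatory terminology above. All class and
# field names are illustrative only; they are not drawn from any actual
# observatory software.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Sensor:
    name: str                        # fundamental measurement device, source of a data stream

@dataclass
class Instrument:
    name: str
    sensors: List[Sensor]            # sensors are part of, or attached to, instruments
    port_type: str = "10/100BaseT"   # access layer connection (or RS232/RS422 serial)
    supply_vdc: int = 48             # typical instrument port supply: 12 or 48 VDC

@dataclass
class Node:
    name: str
    instruments: List[Instrument] = field(default_factory=list)

@dataclass
class Observatory:
    shore_station: str
    nodes: List[Node] = field(default_factory=list)   # linked by the backbone cable (core layer)

# Example: one node carrying a single core instrument
obs = Observatory("shore", [Node("A", [Instrument("seismometer", [Sensor("ground motion")])])])
print(sum(len(n.instruments) for n in obs.nodes), "instrument(s) deployed")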

3. Ocean observatory design requirements

The commercial marketplace has driven the
design of submarine fiber optic telecommunica-
tion systems to very high data rates using a Dense
Wavelength Division Multiplexed (DWDM)
physical layer implemented using comparatively
simple, very high reliability submarine equip-
ment. Electronic complexity is concentrated at a
very small number of shore stations. Combined
with advances in submarine cable installation and
burial, the result is extremely reliable communi-
cations infrastructure.

Many of the principal design drivers for
ocean observatories differ from those for
conventional submarine telecommunications
systems. First, ocean observatories require data
to be input and output (i.e., switched and aggre-
gated) at one or more seafloor nodes rather than
at a few land terminuses. Second, ocean observa-
tories must distribute a lot of power (typically,
multiple kW per node) to the seafloor at variable
and fluctuating rates to supply both seafloor in-
struments and the observatory hotel load. Third,
science requires the delivery of accurate (typical-
ly, order 1 µs in an absolute sense) time to sea-
floor instruments which has no counterpart in the
commercial world. Fourth, the seafloor infra-
structure for an ocean observatory is inherently
dynamic, and hence the wet plant has to be ex-
pandable and reconfigurable to meet changing
science needs. Finally, because the wet commu-
nications and power infrastructure is compara-
tively complex, ocean observatory infrastructure
must be designed for low cost maintenance and
upgradeability.

Despite these differences, a key design driv-
er in both ocean observatory and commercial
telecommunications system design is reliabili-
ty. A primary reliability measure is the proba-
bility that data will be received on shore or at
another seafloor instrument or node from a giv-
en science instrument on the seafloor, and the
least reliable infrastructure components in this
path inevitably are the node power and commu-
nications electronics. As further discussed by
Chave et al. (2004), an immediate corollary is
that there may be no reliability gain from com-
bining high cost, high reliability submarine
telecommunications wet plant with lower relia-
bility node electronic systems.
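
A rough calculation makes this corollary concrete. In the Python sketch below, the failure rates and repair times are invented for illustration only and are not NEPTUNE design figures; the point is simply that the availability of the end-to-end path is dominated by its weakest element.

# Illustrative calculation: the probability of getting data from a seafloor
# instrument to shore is limited by the least reliable element in the path.
# All failure rates and repair times below are invented for illustration.
HOURS_PER_YEAR = 8766.0

def availability(fit, mttr_hours):
    """Steady-state availability from a failure rate in FITs
    (1 FIT = 1 failure per 1e9 h) and a mean time to repair."""
    mtbf = 1e9 / fit
    return mtbf / (mtbf + mttr_hours)

# Telecom-grade wet plant: very low failure rate, but repair needs a cable ship.
backbone = availability(fit=10.0, mttr_hours=3 * 30 * 24)
# Node power/communications electronics: a much higher failure rate, repaired
# on a (hypothetical) semi-annual maintenance visit.
node_electronics = availability(fit=20000.0, mttr_hours=HOURS_PER_YEAR / 2)

path = backbone * node_electronics   # series path: instrument -> node -> shore
print(f"backbone availability         {backbone:.6f}")
print(f"node electronics availability {node_electronics:.6f}")
print(f"end-to-end path availability  {path:.6f}")
# The product is essentially the node electronics availability, so making the
# backbone even more reliable buys little overall reliability.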

Taken together, these points suggest that the
overall design of ocean observatories will be fun-
damentally different from that of submarine tele-
communications systems. Understanding why
this is true is a principal purpose of this paper, and
is facilitated by taking a system engineering view
of the ocean observatory design process. Howev-
er, although design requirements differ between
telecommunications and science systems, the
high quality submarine fiber optic cables, joints,
terminations, and wet hardware, as well as the in-
stallation and burial expertise of industry, are
highly relevant to ocean observatory implementa-
tion.

4. The system engineering process

A system is defined as a collection of func-
tional elements which work together to perform
some defined set of functions. It consists of a
hierarchy of sub-systems which, taken in aggregate, constitute the whole system. The observa-
tory node in fig. 1 is one sub-system in the
ocean observatory system, and may in turn be
divided into a power sub-system, a backbone
communications sub-system, an instrument in-
terface sub-system, and so on. 

System engineering is a management and
engineering process which brings a system into
being. Its development as an engineering disci-
pline grew out of the need to manage and inte-
grate increasingly complex technological proj-
ects. Recent texts on the subject include Eisner
(1997), Blanchard (1998), and Stevens et al.
(1998). Neither the system engineering process
nor its procedures are uniquely defined, but one
common implementation is contained in Mili-
tary Standard 499B (1991). The main stages of
this standard as outlined here are typically in-
cluded in any well-designed system engineering
implementation.

Figure 2 is a top-level view of the Mil Std
499B system engineering process. It contains three main stages, 1) requirements analysis, 2) functional analysis/allocation, and 3) synthesis, together with a continuous, cross-cutting system analysis and control phase.

The requirements analysis stage focuses on
two types of requirements: user and perform-
ance. In the context of an ocean observatory, the
user requirements flow from science needs, and
are identified through the analysis of a wide
range of use scenarios incorporating representa-
tive suites of sensors and platforms. The goal is
to identify and define the set of functions that the
system must do to meet the user requirements,
and to place bounds on how well these functions
must be carried out. These are called the func-
tional and performance requirements, respective-
ly. In the sequel, these will collectively be re-
ferred to as the science requirements. 

The second system engineering stage is func-
tional analysis/allocation, in which the objectives
defined by the science requirements are decom-
posed into a set of lower-level functions and per-
formance allocations are applied to each of them.
The functional interfaces and a candidate func-
tional architecture are defined. This process is iterative and requires constant verification that the science requirements are being met.
The third system engineering stage is synthesis,
in which all elements of the functional design are
transformed into a physical design. Synthesis be-
gins at a concept design level, then passes on to a
preliminary design level where risk is mitigated
by testing key parts of the hardware and software,
and then passes to a detailed design level where
full prototypes are constructed. This stage is also
iterative, both with the preceding functional ana-
lysis/allocation stage to ensure that the required
functionality is being provided and with the re-
quirements analysis phase to verify that the sci-
ence requirements are being met.

The cross-cutting phase of the system engi-
neering process is system analysis and control.

Fig. 2. Cartoon illustrating the major stages and
phases of the Military Standard 499B system engi-
neering process. See text for discussion.




This consists of a set of trade-off studies com-
paring the feasibility, performance, and cost of
alternative technical approaches along with a
set of over-arching tasks to manage risk, con-
figuration, and interfaces. Ongoing documenta-
tion and technical reviews are also a system
analysis and control function.

The focus in the remainder of this paper will
primarily be on the requirements analysis phase
and how this leads to functional and physical im-
plementations of an ocean observatory system.
As an illustration of the refinement process using
trade studies, two selected elements will be ex-
amined in detail.

5. Science requirements for a cabled ocean
observatory

In the sequel, selected science requirements
for the NEPTUNE Regional Cabled Observatory
(RCO) will illustrate the system engineering
process used to derive them. NEPTUNE is con-
ceived as a multi-node regional observatory com-
prising 26 seafloor science nodes with two shore
connections covering the Juan de Fuca plate off
northwest North America (Delaney et al., 2000).
Figure 3 presents a notional layout for NEP-
TUNE. The NEPTUNE science requirements are
broad and generic, and could easily apply to oth-

Fig. 3. Map illustrating the notional topology of the NEPTUNE regional cabled observatory. The letter labels
delineate specific nodes, while the numerical labels give the optical link lengths in km.




er installations such as the planned ARENA sys-
tem in Japan or components of ESONET in Eu-
rope. In the feasibility study for NEPTUNE
(NEPTUNE Consortium, 2000), an initial survey
of the range of science that could be accom-
plished with an RCO was carried out, including
the development of use scenarios and an assess-
ment of the characteristics of instruments that
might be installed. This serves as initial input to
definition of the science requirements. A similar
procedure was used to define the science require-
ments for GEOSTAR, as documented in Thiel 
et al. (1994).

As should be clear from fig. 2, deriving the
science requirements is an iterative process. At
the outset, functional and performance require-
ments may be posed by the user community that
cannot be met with available technology, or even
that may not be consistent with known physics.
The inner requirements and design loops com-
bined with the outer verification loop (fig. 2)
serve to improve the initial requirements and
eliminate such problems. The science require-
ments presented here are the product of several
such stages of iterative refinement, but are still
considered to be in draft form until the synthesis
stage passes from the concept through the de-
tailed design levels.

The science requirements to be discussed
are divided into general, power network, and
data communications network categories, and
are the most mature ones for NEPTUNE. There
are numerous other categories in the RCO, in-
cluding time distribution, observatory control,
science instrument interface, user/observatory
interaction, data management and archiving, se-
curity, operations, and reliability which will not
be discussed.

5.1. Design principles

The design principles are overarching re-
quirements which apply to all parts of the RCO,
and serve as guiding principles through the en-
tire design process. There are ten design princi-
ples for the RCO, each of which is denoted by
a keyword.

The first and second design principles es-
tablish life and cost goals for the RCO. The lat-
ter is important, as virtually all ocean observa-
tory installations will be cost-capped. Life cy-
cle cost is defined as the sum of expenditures
for Research, Development, Test, and Evalua-
tion (RDT&E), procurement and installation,
and Operations and Maintenance (O&M) over
the design life of the system. 

A.1. Lifetime – The RCO shall operate,
with appropriate maintenance, for a design life
of at least 25 years.

A.2. Cost – The RCO shall be designed to
minimize the life cycle cost.
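
As a minimal illustration of the life cycle cost definition above, the Python fragment below sums the three expenditure categories over the design life; the cost figures are placeholders, not estimates for any actual observatory.

# Life cycle cost = RDT&E + procurement/installation + O&M over the design life.
# All dollar figures are placeholders for illustration only.
DESIGN_LIFE_YEARS = 25                      # design principle A.1

rdte = 20e6                                 # research, development, test and evaluation
procurement_installation = 150e6
om_per_year = 10e6                          # operations and maintenance

life_cycle_cost = rdte + procurement_installation + om_per_year * DESIGN_LIFE_YEARS
print(f"life cycle cost over {DESIGN_LIFE_YEARS} years: ${life_cycle_cost/1e6:.0f}M")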

The third through sixth design principles
state that all components of the RCO have to be
capable of reconfiguration or extension to adapt
to a changing mission after installation, and that
the basic infrastructure has to be designed to be
flexible and able to accommodate changes in
technology over time. Static designs would not
suffice, and technologies which inherently lim-
it the number of nodes that can be implemented
over a given part of the RCO would not be ap-
propriate.

A.3. Reconfigurability – The RCO shall al-
low all resources to be dynamically directed
where science needs and priorities dictate.

A.4. Scalability – The RCO shall be ex-
pandable, so that additional science nodes
which meet the observatory reliability goals can
be placed near or at locations of interest that
may develop in the future.

A.5. Extendability – The RCO shall support
individual instruments or clusters of instru-
ments at sites up to 100 km away from the sci-
ence nodes with possibly reduced power and
communications capability and reliability.

A.6. Upgradeability – The RCO shall be
upgradeable to accommodate future technology
improvements.

The seventh and eighth design principles es-
tablish reliability and fault tolerance goals for
the RCO. These are quantified later in the sci-
ence requirements, and are intended to establish
the manner in which they will be derived. The
reliability design principle states that the inte-
gral probability of data transmission from sea-
floor node to shore (or another seafloor node)
will be the principal measure of RCO reliabili-
ty. A design which is highly reliable in some
parts of the system but offers no improvement
in the total probability of instrument connectiv-
ity is not inherently superior. Other criteria,
such as life cycle cost, must be applied to com-
pare designs with similar reliability.

A.7. Robustness – The RCO shall utilize
fault tolerant design principles for both hard-
ware and software to minimize potential single
points of failure.

A.8. Reliability – The primary measure of
RCO reliability shall be the probability of being
able to send data from any science instrument
to shore and/or to other science nodes, exclu-
sive of instrument functionality.

The ninth design principle specifies that all
components of the RCO have to be designed
with a forward-looking rather than a status quo
perspective in order to meet future science needs.

A.9. Futurecasting – The RCO shall have
functionality and performance significantly be-
yond that required to support current use sce-
narios so that experiments and instruments that
may reasonably be anticipated to develop over
the expected life of the facility can be accom-
modated. 

The final design principle states that all of
the designs for hardware and software elements
of the RCO shall be public insofar as possible.
An ocean observatory should be viewed as an
end-to-end scientific instrument, and hence
black box elements between instrument and
user are not appropriate.

A.10. Open design principle – The RCO
hardware designs and specifications shall be
freely and openly available, and all software el-
ements shall be based on open standards to the
greatest extent possible.

5.2. Power network

The power network design requirements are
specific to the distribution of power to supply
both the RCO infrastructure and seafloor sensors.
Where specific numbers appear, these represent
requirement and design loop refinement based in
particular on the voltage and resistance rating of
standard submarine fiber optic cable. The prem-
ise is that the design of custom cable would 
be prohibitively expensive, and the additional
weight from adding copper to reduce the intrinsic
resistance per unit length of the cable would lead
to installation and handling problems. 
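
A first-order calculation shows how the cable parameters constrain power delivery. In the Python sketch below, the 10 kV feed voltage, 1 Ω/km conductor resistance and 400 km shore-to-node distance are assumed values chosen for illustration, not design numbers.

# Rough illustration of why the voltage rating and resistance per unit length
# of the backbone cable drive the power requirements. All numbers are
# assumptions for illustration, not NEPTUNE design values.
V_FEED = 10_000.0        # volts at the shore station
R_PER_KM = 1.0           # ohms per km of conductor
LENGTH_KM = 400.0        # one-way distance from shore to a node
P_LOAD = 5_000.0         # watts drawn at the node (the B.1 goal)

R = R_PER_KM * LENGTH_KM           # total conductor resistance (seawater return neglected)
I = P_LOAD / V_FEED                # first-order current estimate, ignoring the drop
drop = I * R                       # voltage lost along the cable
loss = I**2 * R                    # power dissipated in the conductor (I²R)

print(f"current       {I:.2f} A")
print(f"voltage drop  {drop:.0f} V of {V_FEED:.0f} V")
print(f"I²R loss      {loss:.0f} W to deliver {P_LOAD:.0f} W")
# Adding copper would lower R_PER_KM and the loss, but at the cost of a heavier,
# harder-to-handle custom cable, which is the premise stated above.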

The first and second requirements specify, respectively, that the primary goals are maximizing the average and peak power delivered to all nodes
simultaneously and maintaining the flexibility to
direct more power to a small number of selected
nodes. Designs which either inherently favor
some nodes over others or lack flexibility to allo-
cate power would not be appropriate.

B.1. The power network shall provide the
maximum average continuous user power
equally to all nodes, with a goal of 5 kW.

B.2. The power network shall provide the
maximum peak user power to as many nodes as
possible, with a goal of 10 kW.

The third requirement states that the power
system will always start up in a predictable
manner.

B.3. The power network shall be in a known
state upon power up.

The fourth and fifth requirements specify
that minimizing the effect of faults (either phys-
ical faults on the backbone cable or power sys-
tem problems) on science and localizing them
when they occur are important science needs.

B.4. The power network shall be able to de-
tect and localize infrastructure problems, in-
cluding (but not necessarily limited to) shunt
faults or breaks of the backbone cable, high re-
sistance faults, and node power system failures.

B.5. The power network shall isolate failed
sections of backbone cable or failed nodes such
that the remaining operational nodes can func-
tion normally.

The sixth and seventh science requirements
state that monitoring and allocating power at in-
dividual science nodes is a critical need.

B.6. The power network shall provide volt-
age and current monitoring functionality ade-
quate to detect and localize infrastructure and
user instrument problems, including (but not
necessarily limited to) over-current faults in
user and infrastructure loads and ground faults.

B.7. The power network shall allow power
delivery to all nodes and science ports to be
scheduled and prioritized.

The final design requirement serves to pro-
tect both the infrastructure and attached science
instruments from damage due to unwanted cur-
rent flow through housings and connectors. An
additional need to monitor instruments and con-
nectors for ground faults appears elsewhere in
the science requirements document.

B.8. All pressure cases, including support
frames or assemblies electrically connected to
them, shall be DC isolated from all signal and
power circuits in the RCO.

Additional environmental requirements lead
to detailed specifications for power supply rip-
ple, total harmonic distortion, and the like dur-
ing the design cycle.

5.3. Data communications network

The data communication network science
requirements are specific to the backbone data
transport for an RCO. Specifications for the
link between instruments and the infrastructure
are stated elsewhere. Where specific numbers
appear, these represent requirement and design
loop refinement based on forward-looking esti-
mates of bandwidth and instrument needs. 

From use scenarios and investigation of the
communications requirements of a wide range of
instruments, a preliminary estimate of both the
aggregate bandwidth of an RCO and the maxi-
mum data rate for an individual instrument can
be derived. The aggregate bandwidth can then be
scaled by a large factor (10 or more) to allow for
future growth and technological development.
One significant unknown is the maximum instru-
ment data rate; except for high definition television
(HDTV) operating in an uncompressed mode at
a data rate in excess of 1 Gb/s, it is difficult to
identify anything which requires more than a few
times 10 Mb/s. Since it is unlikely that extensive
use of uncompressed HDTV will be required,
the total data rate does not reflect this number.
The alternative is to scale the bandwidth upward
in C.1, significantly raising system costs. Note
that C.2 states the data rate that must be deliv-
ered to the user at all times in the RCO life cycle
to allow for aging effects.

C.1. The data communication network shall
support an aggregate backbone data rate of at
least 8 Gb/s.

C.2. Each data communication node subsys-
tem shall support an aggregate instrument data
rate of at least 1 Gb/s exclusive of overhead for
system functions such as (but not necessarily
limited to) framing or re-transmission due to er-
rors, at all times during the observatory life cycle.
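
The following Python sketch illustrates the kind of estimate described above; the instrument list, data rates, node count and growth factor are assumed values for illustration only.

# Sum representative instrument data rates at a node, then scale by node count
# and a growth factor to arrive at a backbone aggregate of the sort given in C.1.
# The instrument list and rates below are invented for illustration.
instruments_per_node = {           # sustained rates in Mb/s
    "broadband seismometer":   0.1,
    "hydrophone array":        2.0,
    "ADCP":                    0.05,
    "compressed video camera": 25.0,
    "CTD / chemical suite":    0.01,
}
NODES = 26                         # notional NEPTUNE node count (fig. 3)
GROWTH_FACTOR = 10                 # allowance for future growth, as discussed above

per_node_mbps = sum(instruments_per_node.values())
aggregate_gbps = per_node_mbps * NODES * GROWTH_FACTOR / 1000.0

print(f"per-node estimate {per_node_mbps:.2f} Mb/s")
print(f"scaled aggregate  {aggregate_gbps:.1f} Gb/s (cf. the 8 Gb/s goal in C.1)")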

The third requirement states that the data
communications system will always start up in
a predictable manner.

C.3. The data communication network shall
be in a known state upon power up.

The fourth science requirement states that
the data communications system needs to be
automatically fault-tolerant. Most standard data
communications systems meet this requirement
automatically.

C.4. Each data communications node sub-
system shall automatically reconfigure itself to
suppress fault propagation, and shall automati-
cally recover from faults.

The fifth and sixth science requirements
specify a need for remote monitoring and con-
trol of the data communications infrastructure.
C.5 is met automatically by most modern data
transport protocols. C.6 anticipates the standard
use of backdoor access into communications
hardware, and flows from reliability require-
ments and the remote deployment of complex
hardware on the seafloor.

C.5. Each data communication node sub-
system shall be monitored and controlled over
the data communication network using standard
protocols.

C.6. Each data communication node sub-
system shall be monitored and controlled over a
high reliability, auxiliary channel.

The seventh science requirement states that
the data communications network must be dy-
namically changeable to accommodate chang-
ing science needs.

C.7. The data communication network shall
be remotely re-configurable so that data trans-
mission to and from each node can be sched-
uled and prioritized.

The eighth science requirement specifies in-
strument data rates that must be supported.
These are stated in more detail in the instrument
interface specifications.

C.8. Each data communication node sub-
system shall support a maximum data rate from
each instrument of 100 Mb/s, and for at least
one instrument or secondary node at 1 Gb/s, us-
ing standard Internet protocols, including (but
not necessarily limited to) TCP/IP and standard
application layer protocols.

The final two data communications science
requirements are derived from future-casting the
current and emerging state-of-the-art in sensor
network development in accordance with A.9.
While the initial expectation for NEPTUNE was
that communications would only occur between
numerous seafloor instruments and a few land
sites, recent developments in sensor networks
suggest that this is very likely to change in the
future. The design of distributed, intelligent, self-
organizing sensor networks (e.g., «smart dust»,
«sensor webs») based on low cost, miniaturized
(i.e. MEMS technology) sensors is an exciting
and rapidly evolving area of research, and it is
reasonable to expect that this will port to the
seafloor in the not very distant future. Implemen-
tation requires minimum latency inter-node and
inter-sensor communications paths on the
seafloor in addition to links to land.

C.9. The data communication network shall
facilitate intra-node, peer-to-peer communica-
tion by user instruments with the minimum pos-
sible latency commensurate with direct inter-in-
strument propagation delay.

C.10. The data communication network
shall facilitate inter-node peer-to-peer commu-
nication by user instruments with the minimum
possible latency commensurate with direct in-
ter-node propagation delay.

6. Implementation and trade-study 
refinement

The iterative refinement of the science re-
quirements and concomitant development of
the functional and physical designs is complex
and multi-faceted, and it is nearly impossible to
describe the entire process. Instead, two illus-
trative examples will be given for the power and
data communications systems, respectively. 

6.1. Parallel versus serial power system

The technical issues surrounding the choice of serial or parallel power for NEPTUNE are described in Howe et al. (2002). A series connection of terrestrial sources and the fixed loads at seafloor optical amplifiers (along with the associated I²R losses on the submarine cable), with a single seawater return path, is always used for standard submarine telecommunications systems. A parallel connection of the loads, in which each optical amplifier would have a seawater ground connection, has not been used, and in fact appears to offer no advantages in this application.

For a serial power meshed system such as
NEPTUNE (fig. 3), additional complexity is
posed by the need to regenerate the power twice
or more at branches and for active reconnection
to handle faults. Hardware to implement these
functions does not currently exist, although
there appear to be no overwhelming obstacles
to constructing it. Perhaps more significantly,
because power must be conserved at a branch
splitter, there is no efficiency gain to branching
with serial power. By contrast, a parallel power
system is easy to branch and gains efficiency
when there are multiple paths to each load. In
addition, a parallel system inherently has a
higher power capability because the I²R losses
are always lower due to the use of multiple
rather than a single ground. However, fault de-
tection on a branched parallel system is inher-
ently more complex due to multiple paths to a
break. This can be mitigated by including a ca-
pability in the nodes to isolate each section of
cable.

These observations about ease of branching
and power delivery limits, as well as considera-
tion of the voltage and current capabilities of
commercial-off-the-shelf submarine fiber optic
cable, led to the decision to utilize a parallel
rather than a serial power system for NEP-
TUNE. Meeting science requirements B.1 and
B.2 would be much more difficult if not impos-
sible using a serial power system for a large
meshed system like NEPTUNE. 
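
A first-order numerical sketch of the voltage and current considerations above follows; the node count, cable resistance, insulation rating and feed current are all assumptions for illustration rather than NEPTUNE design numbers.

# Serial versus parallel delivery of kilowatts to many nodes, to first order.
# All numbers below are assumptions for illustration only.
NODES, P_NODE = 20, 5_000.0     # nodes and watts per node (the B.1 goal)
R_TOTAL = 1.0 * 1500.0          # ohms: 1500 km of backbone at 1 ohm/km
V_RATING = 10_000.0             # volts, assumed insulation rating of standard cable

# (a) Constant-current series feed (telecom style): each node drops P/I volts
# and the same current flows through the entire cable.
I_SERIES = 1.6                  # amperes, a representative telecom feed current
v_required = NODES * P_NODE / I_SERIES + I_SERIES * R_TOTAL
print(f"series feed voltage   {v_required/1e3:.0f} kV "
      f"(far above the {V_RATING/1e3:.0f} kV rating)")

# (b) Constant-voltage parallel feed: the cable is held near its rating and the
# current, not the voltage, grows with the delivered power.
i_parallel = NODES * P_NODE / V_RATING
print(f"parallel feed current {i_parallel:.0f} A at {V_RATING/1e3:.0f} kV "
      f"(shared over multiple shore feeds and mesh paths in practice)")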

6.2. Meshed network versus star data 
communications architecture

The NEPTUNE data communication design
is based on an implementation of Gigabit Eth-
ernet using a DWDM physical layer to provide
up to 16 bi-directional Gb/s channels over a sin-
gle pair of optical fibers. The NEPTUNE back-
bone topology (fig. 3) is meshed, and the com-
munications system gains reliability from re-
dundant paths, as is also true for the Internet.
There are multiple arguments for this design
approach, as reviewed by Maffei et al. (2003). 

An alternative approach has also been con-
sidered based on a star topology, in which each
seafloor node communicates with each shore sta-
tion using a single pair of wavelengths in a
DWDM system, but does not communicate di-
rectly with any other node except through a
shore station. This network topology was used in
the early days of data networking when routing
hardware was primitive and costly, but is obso-
lete and no longer used. The star network would
be implemented using point-to-point SONET
data transport with custom optical add/drop mul-
tiplexing at each node and submarine telecom-
munications-standard optical amplifiers to pro-
vide path gain. The premise was that the extreme
reliability of submarine optical amplifiers (typi-
cally, of order 10 FITs; 1 FIT equals 1 failure in 10⁹ h) would produce significantly higher data
communications network reliability. 

A complete comparison of these two ap-
proaches is complex, but four simple observa-
tions can be used to derive a first order version:

1) To lowest order, the star network is more
expensive than the Gigabit Ethernet system by
the cost of 50+ submarine optical amplifiers
and the non-recurring engineering required to
implement optical add/drop multiplexing, for a
total difference of about US$30M. 

2) Based on statistical modeling, the data
network reliability based on the criterion of science requirement A.8 is slightly better for the Gigabit Ethernet approach, although the difference is barely significant. The much higher
backbone reliability of the star network ap-
proach does not translate into higher overall re-
liability because the node electronic reliability
is orders of magnitude lower.

3) The failure rate of individual node
SONET electronic systems for the star network
approach is about 3 times higher than that of
high quality Gigabit Ethernet hardware. This
translates into higher O&M costs as more fail-

ures will occur per unit time, presuming that all
failures are repaired as soon as possible after
they occur. Given the higher procurement costs
of the star network design, the life cycle costs
of this approach will be higher, and probably
much higher, than the Ethernet implementation.

4) The inter-node latency of the star net-
work design will depend on the actual pair of
nodes being considered, but will in general be
high because of the necessity to transit to shore
and back to the seafloor. A typical value will be
tens of ms with no tendency to be lower for ad-
jacent than sub-adjacent nodes. By contrast, the
adjacent inter-node latency of the Ethernet sys-
tem approaches the goal in criterion C.10, with
sub-adjacent node latency being nearly integral
multiples of this value.
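
Rough numbers can be attached to observation 4. In the Python sketch below, propagation in fiber is taken as about 5 µs per km (group index near 1.5), and the link lengths are assumptions read only qualitatively from fig. 3.

# Adjacent-node latency for the meshed Ethernet design versus a star topology.
# Distances are assumptions for illustration only.
US_PER_KM = 5.0                 # one-way propagation delay in fiber, microseconds per km

def one_way_ms(km):
    return km * US_PER_KM / 1000.0

ADJACENT_KM = 100.0             # assumed adjacent-node link length
TO_SHORE_KM = 1500.0            # assumed node-to-shore path length in a star topology

mesh_latency = one_way_ms(ADJACENT_KM)        # direct seafloor hop
star_latency = 2 * one_way_ms(TO_SHORE_KM)    # up to shore and back down

print(f"meshed Ethernet, adjacent nodes: ~{mesh_latency:.1f} ms (plus switching delay)")
print(f"star topology, adjacent nodes:   ~{star_latency:.1f} ms")
# The star value is tens of ms regardless of how close the nodes are, which is
# why the meshed design comes closer to the C.9/C.10 latency goals.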

Taken in aggregate, these observations militate against the star network and in favor of the
Gigabit Ethernet approach in several ways.

These examples indicate that the design of
ocean observatories is a complex engineering
process, and can be facilitated using a systems
engineering approach based on requirements
derived from science needs and use scenarios.

REFERENCES

AUSTIN, T.C., J.B. EDSON, W.R. MCGILLIS, M. PURCELL,
R.A. PETITT JR., M.K. MCELROY, C.W. GRANT, J. WARE
and S.K. HURST (2002): A network-based telemetry ar-
chitecture developed for the Martha’s Vineyard Coastal
Observatory, IEEE J. Ocean. Eng., 27, 228-234.

BERANZOLI, L., A. DE SANTIS, G. ETIOPE, P. FAVALI, F. FRU-
GONI, G. SMRIGLIO, F. GASPARONI and A. MARIGO
(1998): GEOSTAR: a GEophysical and Oceanograph-
ic STation for Abyssal Research, Phys. Earth Planet.
Inter., 108, 175-183.

BERANZOLI, L., T. BRAUN, M. CALCARA, P. CASALE, A. DE
SANTIS, G. D’ANNA, D. DI MAURO, G. ETIOPE, P. FAVALI,
J.-L. FUDA, F. FRUGONI, F. GAMBERI, M. MARANI, C. MIL-
LOT, C. MONTUORI and G. SMRIGLIO (2003): Mission re-
sults from the first GEOSTAR Observatory (Adriatic
Sea, 1998), Earth Planets Space, 55, 361-374.

BLANCHARD, B.S. (1998): System Engineering Management
(John Wiley, New York), 2nd edition.

CHAVE, A.D., F.K. DUENNEBIER, R. BUTLER, R.A. PETITT, JR.,
F.B. WOODING, D. HARRIS, J.W. BAILEY, E. HOBART, J.
JOLLY, A.D. BOWEN and D.R. YOERGER (2002): H2O: the
Hawaii-2 Observatory, in Science-Technology Synergy
for Research in the Marine Environment: Challenges for
the XXI Century, edited by L. BERANZOLI, P. FAVALI and
G. SMRIGLIO, Developments in Marine Technology Se-
ries (Elsevier, Amsterdam), 12, 83-92.

CHAVE, A.D., G. WATERWORTH, A.R. MAFFEI and G. MAS-
SION (2004): Cabled ocean observatory systems, Mar.
Technol. Soc. J., 38, 31-43.

CLARK, H.L. and A. ISERN (2003): Cabled observatories for
ocean research: a component of the Ocean Observato-
ries Initiative, in Proceedings of the 3rd International
Workshop on Scientific Use of Submarine Cables and
Related Technologies (IEEE, Piscataway), 209-214.

DELANEY, J.R., G.R. HEATH, B. HOWE, A.D. CHAVE and H.
KIRKHAM (2000): NEPTUNE: real-time ocean and
Earth sciences at the scale of a tectonic plate, Oceanog-
raphy, 13, 71-83.

DEWEY, R. and V. TUNNICLIFFE (2003): VENUS: future sci-
ence on a coastal mid-depth observatory, in Proceed-
ings of the 3rd International Workshop on Scientific
Use of Submarine Cables and Related Technologies
(IEEE, Piscataway), 232-233.

EISNER, H. (1997): Essentials of Project and Systems Engi-
neering Management (John Wiley, New York).

FAVALI, P., G. SMRIGLIO, L. BERANZOLI, T. BRAUN, M. CAL-
CARA, G. D’ANNA, A. DE SANTIS, D. DI MAURO, G. ETIO-
PE, F. FRUGONI, V. IAFOLLA, S. MONNA, C. MONTUORI, S.
NOZZOLI, P. PALANGIO and G. ROMEO (2002): Towards a
permanent deep-sea observatory: the GEO-STAR Euro-
pean experiment, in Science-Technology Synergy for Re-
search in the Marine Environment: Challenges for the
XXI Century, edited by L. BERANZOLI, P. FAVALI and G.
SMRIGLIO, Developments in Marine Technology Series
(Elsevier, Amsterdam), 12, 111-120.

GLENN, S.M., T.D. DICKEY, B. PARKER and W. BOICOURT
(2000): Long-term real-time coastal ocean observation
networks, Oceanography, 13, 24-34.

HIRATA, K., M. AOYAGI, H. MIKADA, K. KAWAGUCHI, Y. KAIHO,
R. IWASE, S. MORITA, I. FUJISAWA, H. SUGIOKA, K. MIT-
SUZAWA, K. SUYEHIRO, H. KINOSHITA and N. FUJIAWARA
(2002): Real-time geophysical measurements on the deep
seafloor using submarine cable in the Southern Kurile
subduction zone, IEEE J. Ocean Eng., 27, 170-181.

HOWE, B.M., H. KIRKHAM and V. VORPERIAN (2002): Pow-
er system considerations for undersea observatories,
IEEE J. Ocean Eng., 27, 267-274.

KASAHARA, J., Y. SHIRASAKI and H. MOMMA (2000): Multi-
disciplinary geophysical measurement on the ocean floor
using decommissioned submarine cables: VENUS proj-
ect, IEEE J. Ocean Eng., 25, 111-120.

MAFFEI, A., J. BAILEY, A. BRADLEY, A.D. CHAVE, X. GAR-
CIA, H. GELMAN, S. LERNER, G. MASSION and D.
YOERGER (2003): A modular gigabit Ethernet backbone
for NEPTUNE and other ocean observatories, in Pro-
ceedings of the 3rd International Workshop on Scientif-
ic Use of Submarine Cables and Related Technologies
(IEEE, Piscataway), 191-196.

MILITARY STANDARD 499B (1991): Systems Engineering
(US Department of Defense, Washington DC).

MOMMA, H., R. IWASE, K. MITSUZAWA, Y. KAIHO and Y. FU-
JIWARA (1998): Preliminary results of a three-year con-
tinuous observation by a deep seafloor observatory in
Sagami Bay, Central Japan, Phys. Earth Planet. Inter.,
108, 263-274.

NEPTUNE CONSORTIUM (2000): NEPTUNE Feasibility Study
(available on line: http://www.neptune.washington.edu/
pub/documents/documents.html), pp. 106.

PETITT, R.A., F.B. WOODING, D. HARRIS, J.W. BAILEY, E.
HOBART, J. JOLLY, A.D. CHAVE, F.K. DUENNEBIER and
R. BUTLER (2002): The Hawaii-2 Observatory, IEEE J.
Ocean Eng., 27, 245-253.

SCHOFIELD, O., T. BERGMANN, P. BISSETT, J.F. GRASSLE,
D.B. HAIDVOGEL, J. KOHUT, M. MOLINE and S.M.
GLENN (2002): The long-term ecosystem observatory:
an integrated coastal observatory, IEEE J. Ocean Eng.,
27, 146-154.

SHIRASAKI, Y., M. YOSHIDA, T. NISHIDA, K. KAWAGUCHI, H.
MIKADA and K. ASAKAWA (2003): ARENA: a versatile
and multidisciplinary scientific submarine cable net-
work of next generation, in Proceedings of the 3rd In-
ternational Workshop on Scientific Use of Submarine
Cables and Related Technologies (IEEE, Piscataway),
226-231.

ST. ARNAUD, B., A.D. CHAVE, A. MAFFEI, E. LAZOWSKA,
L. SMARR and G. GOPALAN (2004): An integrated ap-
proach to ocean observatory data acquisition/manage-
ment and infrastructure control using web services,
Mar. Technol. Soc. J., 38, 155-163.

STEVENS, R., P. BROOK, K. JACKSON and S. ARNOLD (1998):
Systems Engineering (Pearson, Prentice Hall, London).

THIEL, H., K.-O. KIRSTEIN, C. LUTH, U. LUTH, G. LUTHER,
L.-A. MEYER-REIL, O. PFANNKUCHE and M. WEYDERT
(1994): Scientific requirements for an abyssal benthic
laboratory, J. Mar. Syst., 4, 421-439.