CA2034651A1 - Distributed protocol for improving the survivability of telecommunications trunk networks - Google Patents

Distributed protocol for improving the survivability of telecommunications trunk networks

Info

Publication number
CA2034651A1
Authority
CA
Canada
Prior art keywords
link
node
network
failure
specific
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002034651A
Other languages
French (fr)
Inventor
Brian A. Coan
Mario P. Vecchi
Liang T. Wu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Iconectiv LLC
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CA2034651A1 publication Critical patent/CA2034651A1/en
Abandoned legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J3/00Time-division multiplex systems
    • H04J3/02Details
    • H04J3/08Intermediate station arrangements, e.g. for branching, for tapping-off
    • H04J3/085Intermediate station arrangements, e.g. for branching, for tapping-off for ring networks, e.g. SDH/SONET rings, self-healing rings, meshed SDH/SONET networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/02Topology update or discovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/28Routing or path finding of packets in data switching networks using route fault recovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/40Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q3/00Selecting arrangements
    • H04Q3/0016Arrangements providing connection between exchanges
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q3/00Selecting arrangements
    • H04Q3/0016Arrangements providing connection between exchanges
    • H04Q3/0062Provisions for network management
    • H04Q3/0075Fault management techniques
    • H04Q3/0079Fault management techniques involving restoration of networks, e.g. disaster recovery, self-healing networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J2203/00Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
    • H04J2203/0001Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
    • H04J2203/0028Local loop
    • H04J2203/0039Topology
    • H04J2203/0042Ring
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J2203/00Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
    • H04J2203/0001Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
    • H04J2203/0057Operations, administration and maintenance [OAM]
    • H04J2203/006Fault tolerance and recovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q2213/00Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13141Hunting for free outlet, circuit or channel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q2213/00Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13167Redundant apparatus
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q2213/00Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13396Signaling in general, in-band signalling

Abstract

ABSTRACT OF THE INVENTION

A method for reconfiguring a telecommunications network (10) comprising a plurality of reconfigurable cross-connect nodes (A, B, C, D) interconnected by links (ℓ = 1, 2, 3, 4, 5) when a failure event occurs is disclosed. The method comprises storing at each node a precomputed configuration table corresponding to each of a plurality of possible network topologies which can result from a plurality of possible failure events. After a specific failure event occurs, the network is flooded with messages so that each of the nodes is informed as to the specific existing topology of the network resulting from the specific failure event.
The nodes are then reconfigured in accordance with the precomputed configuration tables which correspond to the specific existing network topology.

Description


Field of the Invention

The present invention relates to a distributed protocol for maintaining the call-carrying capacity of a reconfigurable trunk network after the failure of one or more network components.

Background of the Invention

A trunk network illustratively comprises a plurality of nodes interconnected by transmission links. Two technological advances have had a substantial impact on the trunk network.
These are the extensive deployment of high-bandwidth fiber-optic transmission links and the deployment of reconfigurable digital cross-connect nodes. (See, e.g., Rodney J. Boehm, et al., "Standardized Fiber Optic Transmission Systems - A Synchronous Optical Network View", IEEE Journal on Selected Areas in Comm., Vol. SAC-4, No. 9, December 1986; Satoshi Hasegawa et al., "Dynamic Reconfiguration of Digital Cross-Connect Systems With Network Control and Management", GLOBECOM '87.) Typically, each reconfigurable digital cross-connect node includes a digital cross-connect switch and a configuration table which determines how the links incident to the node are interconnected by the cross-connect switch.
The new technologies have increased the need for survivability strategies in the event of the failure of one or more network components. In particular, the aggregation of traffic which has been accelerated by the deployment of high-bandwidth fiber-optic transmission links has increased the amount of damage which can be caused by a single failure. The accidental severing of a single fiber-optic cable can disrupt tens of thousands of connections. Fortunately, the deployment of reconfigurable digital cross-connect nodes together with the abundant capacity of fiber links greatly increases the ability of a trunk network to recover from failure events. After a fiber-optic cable has been cut or a digital cross-connect node fails, it is possible to use the surviving nodes to reconfigure the network to restore much of the lost call-carrying capacity.
In view of the foregoing, it is an object of the present invention to provide a distributed protocol for maintaining the call-carrying capacity of a telecommunications trunk network after the failure of certain network components. In particular, it is an object of the present invention to provide a distributed protocol which is executed at each node of a trunk network comprising high-bandwidth optical fiber links and reconfigurable digital cross-connect nodes for maintaining the call-carrying capacity of the network in the event of a single or multiple node or link failure.

Summary of the Invention

In a preferred embodiment, the inventive protocol comprises storing at each node a precomputed configuration table corresponding to each of a plurality of possible network topologies (i.e., network configurations) which result from a plurality of possible failure events. After a specific failure event occurs, the network is selectively flooded with messages so that each of the nodes is informed as to the specific existing topology of the network which results from the specific failure event. The nodes are then reconfigured in accordance with the specific prestored configuration tables which correspond to the specific existing network topology.
In short, the protocol of the present invention combines a system which enables the network nodes to agree on the network topology in the event of a failure with a system for reconfiguring the network by loading precomputed configuration tables at the nodes to maintain the call-carrying capacity of the network.
The problem of causing nodes in a network to agree on the network topology has previously been studied (see, e.g., J.M. Spinelli and R.G. Gallager, "Event Driven Topology Broadcast Without Sequence Numbers", IEEE Trans. on Commun., Vol. 37, pp. 468-474, 1989; R. Perlman, "Fault-tolerant Broadcast of Routing Information," Comput. Networks, Vol. 7, pp. 395-405, 1983; J.M. McQuillan, I. Richer, and E.C. Rosen, "The New Routing Algorithm for the ARPANET," IEEE Trans. Commun., Vol. COM-28, pp. 711-719, 1980; Y. Afek, B. Awerbuch, and E. Gafni, "Applying Static Network Protocols to Dynamic Networks," in Proc. 28th IEEE Symp. on Foundations of Computer Science, pp. 358-370, October 1987).
Similarly, configuration tables for digital cross-connect nodes have heretofore been designed. However, no one has heretofore
provided a distributed protocol for maintaining the call-carrying capacity of a network in the event of a failure, which distributed protocol utilizes a system for enabling the network nodes to agree on a network topology in combination with the loading at each node of a particular precomputed configuration table depending on the particular failure to reconfigure the network.
One alternative approach to network survivability involves the designing of low cost multiply connected networks (see, e.g., T.H. Wu, D.J. Kolar, and R.H. Cardwell, "Survivable Network Architecture for Broadband Fiber Optic Networks: Model and Performance Comparisons," IEEE J. Lightwave Tech., Vol. 6, pp. 1698-1709, 1988). Fault tolerance is improved in these networks because there are alternative links and a protection switching strategy to make use of these alternative links. The reconfiguration protocol of the present invention is complementary to these techniques in that both can be used in the same network.
Other distributed approaches to network survivability (see, e.g., W.D. Grover, "The Self-Healing Network: A Fast Distributed Restoration Technique for Networks Using Digital Crossconnect Machines," in IEEE/IEICE Global Telecomm. Conf., pp. 28.2.1-28.2.6, Tokyo, December 1987; C. Han Yang and S. Hasegawa, "FITNESS: Failure Immunization Technology for Network Service Survivability," in IEEE Global Telecomm. Conf., pp. 47.3.1-47.3.6, Hollywood, FL, November/December 1988) do not make use of precomputed configuration tables. These approaches seek to devise a reconfiguration plan in real time after a failure event


has occurred. These techniques have several shortcomings. The plans that are devised deviate quite a bit from the optimal. In addition, these techniques require a large amount of activity in the critical interval from a failure event to the reconfiguration of the network, thus delaying the restoration of service.
The reconfiguration protocol of the present invention overcomes these weaknesses of the prior art survivability techniques by utilizing precomputed and therefore superior configuration tables rather than a sub-optimal reconfiguration plan developed in real time. In addition, by utilizing precomputed configuration tables, the present invention significantly reduces the amount of computation required in the critical interval between failure and network reconfiguration.
Thus, in the present invention, an increase in memory requirements at the nodes is traded off for a reduction in the amount of computation that must be done in the critical interval after the occurrence of a failure event.
After a component of the trunk network fails, physical repairs can take several hours or more. In contrast, the protocol of the present invention can reconfigure a trunk network in tens of milliseconds. The role of the inventive protocol is to provide the best possible service in the interval from a failure event to the return to normal operation.

Brief Description of the Drawing

FIG 1 schematically illustrates an example of a trunk network comprising a plurality of reconfigurable cross-connect nodes interconnected by links.


FIG 2 is a chart which defines a set of possible logical connections for the network of FIG 1.
FIG 3 shows the network of FIG 1 including the logical connections defined by the chart of FIG 2.
FIG 4 shows the configuration tables used at the nodes to obtain the logical connections of FIGs 2 and 3.
FIGs 5, 7, and 9A show configuration tables for particular link failures in the network of FIGs 1 and 3.
FIGs 6, 8, and 9B show how the logical connections of FIG 3 are rerouted using the configuration tables of FIGs 5, 7, and 9A, respectively.
FIGs 10-42 illustrate the execution of a distributed protocol for maintaining the call-carrying capacity of the network in the event of a failure, in accordance with an illustrative embodiment of the present invention.

Detailed Description of the Invention

FIG 1 schematically illustrates a telecommunications network 10. The network 10 comprises a plurality of reconfigurable cross-connect nodes A, B, C, D. The nodes are interconnected by the physical links ℓ=1,2,3,4,5. In FIG 1, the capacity of each link is noted. For example, the capacity of the link ℓ=5 is three channels. The network 10 also includes the central offices W,X,Y,Z. The central offices W,X,Y, and Z are connected to the corresponding nodes A,B,C, and D, by the links w,x,y, and z, respectively.

The network 10 is used to set up a plurality of logical connections between pairs of central offices. Each logical connection starts at one central office, passes through one or more nodes and physical links, and terminates at a second central office. Typically, each logical connection can be routed over a variety of possible paths in the network. In general, the number of logical connections passing through any physical link should not exceed the capacity (i.e., the number of channels) of the physical link. In the event of a network failure such as the failure of one or more links, the nodes A,B,C, and D are reconfigured to preserve as many of the logical connections as possible.
Thus, in the network 10 there are two layers: a physical layer which comprises the nodes and the links which connect them, and a logical layer which comprises the logical connections routed through the physical network. A logical network (i.e., a particular set of logical connections) is established by the nodes by the way they interconnect the channels on the physical links incident thereto. The pattern of cross-connections at each node is stored in a configuration table. A particular logical network is established when a consistent set of configuration tables is loaded at all of the nodes. Generally, there are many possible sets of configuration tables which can be utilized to define the same logical network.
FIG 2 is a chart which defines a set of logical connections which may be set up in the network 10. In the chart of FIG 2,

an integer such as "1" indicates the presence of a particular number of logical connections between a specific pair of central offices and a "0" indicates the absence of such a logical connection. For example, the first line of the chart of FIG 2 indicates that there is one logical connection between the central offices W,Z but no logical connection between the central offices W,Y.
The network 10 of FIG 1 is shown again in FIG 3. In FIG 3, the logical connections defined by the chart of FIG 2 are also schematically illustrated. In FIG 3, the logical connection between W and Z is labeled 11 and the logical connection between Z and X is labeled 12. The logical connection from W to X is labeled 13, the logical connection from Z to Y is labeled 14, and the logical connection from X to Y is labeled 15.
As indicated above, each node includes a configuration table which defines how the physical links incident thereto are interconnected to form a particular set of logical connections.
FIG 4 shows the configuration tables for the nodes A, B, C and D which are used to form the logical connections of FIGs 2 and 3.
The configuration tables of FIG 4 are based on the assumption that all network components including all the links and all the nodes are operating properly. The configuration tables of FIG 4 may be read as follows. Each line of a configuration table represents a connection between a channel of one physical link and a channel of another physical link. For example, the top line of the table at node A defines a connection between link w,
channel 1, and link ℓ=2, channel 1. As shown in FIGs 1 and 3, link w has a capacity of two channels and link ℓ=2 has a capacity of two channels. The second line of the table at node A defines a connection between link w, channel 2, and link ℓ=1, channel 1.
Similarly, the top line of the table at node D defines a connection between link ℓ=1, channel 1, and link z, channel 2.
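For concreteness, a configuration table of this kind can be viewed as a set of cross-connections between (link, channel) pairs. The following minimal Python sketch is an illustration only, not the patent's notation; the two entries shown mirror the node A connections just described, and the helper name cross_connect is an assumption:

```python
# A minimal sketch of one node's configuration table, assuming each
# cross-connection pairs a channel on one incident link with a channel
# on another incident link. Link labels follow FIG 1 (global labels
# "1".."5" plus the central-office link "w"); the two entries mirror
# the node A table lines described in the text above.
ConfigTable = list[tuple[tuple[str, int], tuple[str, int]]]

node_A_table: ConfigTable = [
    (("w", 1), ("2", 1)),  # link w channel 1 <-> link 2 channel 1
    (("w", 2), ("1", 1)),  # link w channel 2 <-> link 1 channel 1
]

def cross_connect(table: ConfigTable, link: str, channel: int):
    """Return the (link, channel) that the given channel is patched to."""
    for a, b in table:
        if a == (link, channel):
            return b
        if b == (link, channel):
            return a
    return None  # channel is not cross-connected at this node

print(cross_connect(node_A_table, "w", 2))  # -> ("1", 1)
```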
In addition to the above-described configuration tables which are loaded at the nodes A,B,C, and D, certain information is required at the central offices. Each central office stores the channel on which each of its logical connections terminates.
For example, the central office W stores the information that the logical connection to the central office X leaves on channel 1 of link w and that the logical connection to the central office Z leaves on channel 2 of link w. This information is not changed after a failure or reconfiguration.
As indicated above, the present invention is a scheme for reconfiguring a physical network to maintain a certain set of logical connections in the event of a failure event comprising the failure of one or more physical components. The scheme has two parts: first, the network is selectively flooded with messages so that all of the nodes agree on the specific failure event which has occurred (i.e., the nodes agree on the specific existing network topology), and then each node is reconfigured by installing a specific precomputed configuration table corresponding to the specific failure event.


To carry out the inventive method, it is necessary to identify those failures for which configuration tables will be computed (these are called covered failures). For each covered failure, each node stores a precomputed configuration table. The decision as to which failures should be covered is an engineering tradeoff. The desire for as much coverage as possible must be balanced against the cost of computing and storing a large number of configuration tables. One possibility is to provide precomputed configuration tables at the nodes for all possible single link failures. Then, if a single link failure occurs, the nodes are reconfigured in accordance with the specific precomputed configuration tables for the specific single link failure. If the network has a failure which is not a single link failure, e.g., a multi-link failure, then all the nodes should use a consistent rule to select particular configuration tables from those that are stored at the nodes. One possible rule is to install the configuration table that corresponds to the highest numbered link that is down, assuming a fixed numbering of all the links in the network. In this case, the technique of the present invention works at least as well and generally better than doing nothing.
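As a concrete illustration of that consistent rule, here is a minimal Python sketch (the data layout, and the use of label 0 for the normal-case table, are assumptions made here for illustration, not part of the patent):

```python
# A sketch (not from the patent) of the consistent selection rule
# described above: when the actual failure is not covered, every node
# installs the single-link-failure table for the highest-numbered link
# that is down. "down_links" is a set of global link labels.
def select_table(down_links: set[int], tables: dict[int, object]):
    """tables maps a covered single-link failure to its configuration table."""
    if not down_links:
        return tables[0]              # 0: assumed key for the normal-case table
    return tables[max(down_links)]    # highest-numbered failed link wins

# Example: links 1 and 3 are down; every node picks the table for link 3.
tables = {0: "normal", 1: "table-1", 2: "table-2", 3: "table-3"}
assert select_table({1, 3}, tables) == "table-3"
```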

An alternative embodiment of the invention involves the computation and storage of configuration tables for all possible single link failures and the most probable multi-link failures. An example of a likely multi-link failure is the loss of all optical fiber links which run in a single conduit or the loss of all nodes and links in a single building. The failure of any node is represented by the failure of all links incident to that node.
A further alternative embodiment of the invention involves the computation and storage of configuration tables for all possible single link failures and all possible multiple link failures. In this case, the failure of any combination of links is covered, as is the failure of any combination of nodes. The tradeoff, of course, is that a large number of configuration tables needs to be computed for this embodiment and a large amount of storage is required for the configuration tables at each node. Algorithms such as linear programming algorithms for computing configuration tables are disclosed in T.C. Hu, "Combinatorial Algorithms", Addison-Wesley, Reading, MA, 1982, pages 82-83.
A collection of configuration tables is constructed as follows. First, configuration tables are provided under the assumption that all components (i.e., physical links and nodes) are operating correctly. Then configuration tables are produced for each covered failure event. For any covered failure event, the logical connections unaffected by the failure are left undisturbed. Spare capacity on the surviving links is used to find new routes for those logical connections which have been disconnected by the failure. Each set of configuration tables (one set for the normal case and one set for each covered failure) may be obtained using an algorithm of the type mentioned above.
It is useful to present a rule of thumb for the amount of spare capacity required on each link to ensure 100% recovery from any single link failure. The required spare capacity as a percent of link utilization is 100/(c-1), where the connectivity c is defined to be the minimum over all pairs of nodes of the number of edge-disjoint paths between those two nodes. If c=1, then it is not possible to ensure 100% recovery from any single link failure.
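The rule of thumb works out as follows in a small sketch (a hedged illustration; the function name is an assumption):

```python
# A small sketch of the spare-capacity rule of thumb above. For a network
# with connectivity c, spare capacity of 100/(c-1) percent of link
# utilization suffices for 100% recovery from any single link failure.
def required_spare_percent(c: int) -> float:
    if c <= 1:
        raise ValueError("c=1: 100% recovery from a single link failure "
                         "cannot be ensured")
    return 100.0 / (c - 1)

# Examples: a 2-connected network needs 100% spare capacity on each link,
# a 3-connected network needs 50%.
print(required_spare_percent(2))  # 100.0
print(required_spare_percent(3))  # 50.0
```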

Some configuration tables for the network 10 of FIGs 1 and 3 are now considered. FIG 5 shows the precomputed configuration tables which are loaded at the nodes A, B, C, and D when the physical link ℓ=1 of the network 10 fails. FIG 6 shows how the configuration tables of FIG 5 reconfigure the network to reroute the logical connection 11 between W and Z which previously utilized the failed link ℓ=1. The logical connection 12 between Z and X, the logical connection 13 between W and X, the logical connection 14 between Z and Y, and the logical connection 15 between X and Y remain unchanged by the reconfiguration since these logical connections do not utilize the link ℓ=1. Similarly, FIG 7 shows the precomputed configuration tables which are loaded at the nodes A, B, C, and D to reconfigure the network when the failure event is the failure of the physical link ℓ=3. The rerouting of the logical connection 15 between X and Y which previously utilized the link ℓ=3 is shown in FIG 8. The logical connections 11, 12, 13, and 14 are not rerouted. In addition, FIG 9A shows the precomputed configuration tables which are loaded at the nodes A, B, C, and D when links ℓ=1 and ℓ=3 both fail. The rerouting of the logical connections 11 and 15 is shown in FIG 9B. In all of the three illustrative failure events described above, spare capacity on the surviving links is used to maintain the set of logical connections.
In accordance with the present invention, in order for the nodes to load the proper configuration tables when there is a failure event, the network is selectively flooded with messages so that all of the nodes are informed of the failure event and have a consistent view of the existing network topology which results from the failure event. All the nodes are then able to load appropriate configuration tables so that the network is reconfigured consistently.
To accomplish this, each node continuously executes the following protocol:


Initialization:
    let n be the label of this node
    for I = 1 to L
        LINKS(I) = UP
    end for

Main Driving Loop:
    while true do
        for d = 1 to DEGREE(n)
            if LINKS(MAP(n,d)) = UP then
                if not WORKING(d) then
                    call UPDATE(MAP(n,d))
                else
                    m = RECEIVE(d)
                    if m != NULL then
                        call UPDATE(m)
                    end if
                end if
            end if
        end for
    end while

Subroutine:
    procedure UPDATE(b)        % b is the label of a link that is down
        if LINKS(b) = UP then
            LINKS(b) = DOWN
            for d = 1 to DEGREE(n)
                SEND(d,b)
            end for
            t = SELECT(LINKS)
            install configuration table determined by t
        end if
    end procedure

The protocol makes use of one array, three functions, and three primitive procedures. There are L links in the trunk network. Each link is given a label ℓ, 1≤ℓ≤L. Each node running the protocol maintains an L-element array LINKS, each of whose elements LINKS(ℓ) is either UP or DOWN. The purpose of this array is to enable each node to track the state of the network. Initially, all elements of a LINKS array are UP. While the network topology is changing, the LINKS array may give an inaccurate or outdated picture of the network. However, when the topology of the network stops changing, the protocol causes all the LINKS arrays of all the nodes to converge to the same value.
One function utilized in the protocol is DEGREE(n). For any node n, the value of DEGREE(n) is the number of links incident to node n. A second function utilized in the protocol is MAP(n,d). The value of MAP(n,d) is the label ℓ of the d-th link incident to node n, where the links incident to the node n have been given some arbitrary local identification number d, 1≤d≤DEGREE(n). Thus, the MAP function serves to convert between a local labeling of a link (i.e., d) incident to a node and the global network-wide labeling of the link (i.e., ℓ). For example, in FIG 10, MAP(n=C,d=1)=3 and MAP(n=C,d=2)=4.
A third function used in the protocol is SELECT(LINKS). As indicated above, the LINKS array is used to represent the state of the network after a failure event. The SELECT function maps each possible state of the LINKS array (i.e., each possible failure event) into the precomputed configuration table which should be used if that failure event happens. If LINKS=F is a covered failure, SELECT(F) should be the configuration table for failure F. If LINKS=F is not a covered failure, SELECT(F) should be the configuration table for some failure F', where F' is a maximal subset of F for which a precomputed configuration table exists.
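A minimal sketch of this SELECT behavior, assuming failures are keyed by frozensets of down-link labels (a data-structure choice made here for illustration; the patent does not prescribe one):

```python
# A sketch of SELECT, assuming each covered failure is keyed by the
# frozenset of down links for which a table was precomputed. When the
# actual failure F is not covered, fall back to a maximal covered
# subset F' of F, so every node degrades in the same controlled way.
def select(links_down: frozenset[int],
           covered: dict[frozenset[int], object]):
    if links_down in covered:
        return covered[links_down]          # exact table for failure F
    # Choose a maximal covered subset F' of F. Ties are broken the same
    # way at every node (here: by sorted contents) so nodes stay consistent.
    subsets = [f for f in covered if f <= links_down]
    best = max(subsets, key=lambda f: (len(f), sorted(f)))
    return covered[best]

covered = {frozenset(): "normal",
           frozenset({1}): "table-1",
           frozenset({3}): "table-3",
           frozenset({1, 3}): "table-1-3"}
print(select(frozenset({1, 3}), covered))   # -> "table-1-3"
```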
The primitive procedures used in the protocol are SEND, RECEIVE and WORKING. At a node n, these procedures operate as follows. The procedure SEND(d,b) sends the message b via the link d incident to node n, i.e., the message b is placed in the buffer at the node at the far end of link d. The message b is the label of a link that is down. The message b is lost if the link d fails before the message b is delivered.
If the link d is up, the procedure RECEIVE(d) returns the first message in a buffer associated with the link d or NULL if there is no message.
The procedure WORKING(d) returns logical true if d is up; otherwise it returns logical false. Thus, the procedure WORKING(d) is used to determine whether the link d is up or down.
In other words, WORKING(d) is used to determine the true physical state of the link d, not merely the corresponding entry in the LINKS array. This may be accomplished using hardware which forms part of the digital cross-connect switch at the node.
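Putting the array, the functions, and the primitives together, the per-node loop and the UPDATE subroutine can be rendered in Python roughly as follows. This is a minimal single-node sketch under stated assumptions: working, receive, send, select, and install are caller-supplied stubs standing in for the hardware, buffer, and table-loading mechanisms described above:

```python
# A minimal single-node sketch of the protocol; all primitives are stubs
# supplied by the caller, mirroring WORKING, RECEIVE, SEND, SELECT and
# the loading of a configuration table into the cross-connect switch.
UP, DOWN = "UP", "DOWN"

class Node:
    def __init__(self, degree, link_map, working, receive, send,
                 select, install, num_links):
        self.degree = degree          # DEGREE(n)
        self.link_map = link_map      # MAP(n, d): dict, local d -> global label
        self.working = working        # WORKING(d): true physical link state
        self.receive = receive        # RECEIVE(d): next buffered message or None
        self.send = send              # SEND(d, b): queue message b on link d
        self.select = select          # SELECT(LINKS): pick a configuration table
        self.install = install        # load a table into the cross-connect
        # Initialization: every entry of the LINKS array starts out UP.
        self.links = {l: UP for l in range(1, num_links + 1)}

    def step(self):
        # One pass of the main driving loop: scan each incident link that
        # the LINKS array still believes to be up.
        for d in range(1, self.degree + 1):
            if self.links[self.link_map[d]] == UP:
                if not self.working(d):
                    self.update(self.link_map[d])   # failure detected directly
                else:
                    m = self.receive(d)             # m: label of a down link
                    if m is not None:
                        self.update(m)              # failure learned by message

    def update(self, b):
        # UPDATE(b): b is the label of a link that is down. Flood the news
        # at most once, then reconfigure from the precomputed tables.
        if self.links[b] == UP:
            self.links[b] = DOWN
            for d in range(1, self.degree + 1):
                self.send(d, b)
            self.install(self.select(self.links))
```

A full simulation would instantiate one Node per cross-connect and deliver sent messages into per-link buffers, mirroring the round-robin execution example that follows.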


The operation of the protocol may be described as follows.
At each node, a LINKS array is maintained indicating the status of all the links ℓ in the network. At each node, a plurality of predetermined configuration tables are stored corresponding to a plurality of possible failure events.
At each node, all of the links associated with the node which are indicated by the LINKS array to be up are tested using the WORKING procedure.
If, as a result of the testing step at a node, a specific link is determined to be down: (1) the LINKS array is updated, (2) the SEND procedure is used to send a message on the links incident to the node indicating that the specific link is down, and (3) the SELECT function is used to choose a configuration table for reconfiguring the node. These three steps are carried out in an UPDATE subroutine of the protocol which is called when the WORKING procedure indicates the specific link is down.
If a specific link is determined as a result of the WORKING procedure to be up, the RECEIVE procedure is used to determine if a message has been received via the specific link indicating that another link in the network is down. If a message has been received via the specific link indicating that another link in the network is down, and the LINKS array does not already indicate that the other link is down, then: (1) update the LINKS array, (2) use the SEND procedure to send a message on all links incident to the node indicating that the other link is down, and (3) use the SELECT function to choose a configuration table
corresponding to the updated LINKS array for reconfiguring the node.
The illustrative reconfiguration protocol set forth above is continuously executed by each of the nodes. Briefly stated, each node examines each of its incident links in turn, skipping those links already known to be down. When a node looks at a link, the two events of interest are directly detecting that a link is down or receiving a message on the link that another link is down. In either case, the node begins flooding the network with the information (if it is new). Thus, an important mechanism used in the protocol is an intelligent flooding of the network with an announcement of each link failure. The number of messages sent per failure is limited because each node forwards information about each failure at most once. Thus, in the worst case, each link failure causes one message to be sent in each direction on each link in the network.
The following is an example of an execution of the foregoing protocol. The protocol works correctly no matter how the nodes interleave their steps. In the following example, an interleaving is shown wherein the nodes take their steps in round robin order, i.e., A,B,C,D; A,B,C,D. At each step, each node makes one full pass through the FOR-LOOP in the main driving loop of the protocol, examining one of its incident links and taking any required action.
The example of the execution of the protocol in the network 10 of FIG 1 is explained in connection with FIG 11 to FIG 42.
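Before turning to the figures, the worst-case message count just stated can be made concrete (a small sketch; the counting is simply the one-message-per-direction-per-link bound above):

```python
# Worst-case flooding cost per link failure: each node forwards news of a
# given failure at most once, so each link carries at most one message in
# each direction about that failure.
def worst_case_messages(num_links: int) -> int:
    return 2 * num_links

# For the five-link network 10 of FIG 1, a single link failure triggers
# at most 10 messages network-wide.
print(worst_case_messages(5))  # 10
```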

FIG 10 is a key which enables the reader to understand FIG 11 to FIG 42. FIG 10 shows the network 10 of FIG 1 in greater detail. As indicated previously, the network 10 comprises the nodes A,B,C, and D. The nodes A,B,C,D are interconnected by the links ℓ=1,2,3,4,5. Each node includes a LINKS array which indicates the up/down status of each of the links ℓ=1,2,3,4,5. When a particular link identifying number is blackened in the LINKS array of a particular node, it means that the particular node has concluded that the corresponding link is down.
FIG 10 also shows a configuration table currently loaded at each node, some of the various possible configuration tables being listed at the left-hand side of FIG 10. In addition, FIG 10 schematically illustrates the format of a message being sent via the link ℓ=1 from node A to node D, which message indicates the failure of the link ℓ=3. At the left-hand side of FIG 10, the DEGREE(n) function (i.e., number of incident links) is evaluated for each of the nodes, and the MAP(n,d) function is evaluated to convert the local link labels d=1,2,3 into the global network link labels ℓ=1,2,3,4,5 for each node. When one of the local link labels is circled in the following FIGs 11 to 42, it indicates that this is the next link to be scanned by the corresponding node.
FIG 11 shows the initial state of the network with all links up, all entries in all LINKS arrays UP, and with all of the nodes having their normal case configuration table. As indicated
previously, in this example, the nodes scan their links in an interleaved fashion. As indicated in FIG 11, the next node to take a step (i.e., scan a link) is node n=A. The link to be scanned at node n=A is indicated by the circle to be the link d=1 (local labeling scheme), corresponding to the link ℓ=1 (global labeling scheme). Since the entry for this link in the LINKS array is UP, the node A proceeds to test this link using the procedure WORKING. Since the result of this test is that the link is up, the node A attempts to receive a message on this link using the procedure RECEIVE. Since no message is available, the node A takes no action. The state of the network after node A takes its step is shown in FIG 12. FIG 12 also indicates that the next link to be scanned by the node A will be d=2, corresponding to ℓ=2, but only after all the other nodes have scanned a link.
As indicated in FIG 12, link ℓ=3 is about to fail. The state of the network after link ℓ=3 fails is shown in FIG 13. As indicated in FIG 13, the next node to take a step is node B. The node B scans its link d=1 corresponding to ℓ=2. Since the entry for this link in the LINKS array is UP, the node B proceeds to test this link using the procedure WORKING. Since the result of this test is that the link is up, the node B attempts to receive a message on this link using the procedure RECEIVE.
Since no message is available, the node B takes no action. The state of the network after node B takes its step is shown in FIG 14.
As indicated in FIG 14, the next node to take a step is node C, which scans its link d=1 corresponding to ℓ=3. Since the entry for this link in the LINKS array is UP, the node C proceeds to test this link using the procedure WORKING. Since the result of this test is that the link has failed, the node C takes the required actions. These actions are shown in FIG 15. In particular, the LINKS array at node C has been updated to indicate the failure of link ℓ=3. A message is sent on the links ℓ=4 and ℓ=3 incident to the node C indicating the link ℓ=3 has failed. (The message sent via link ℓ=3 is lost.) In addition, a configuration table corresponding to the failed link ℓ=3 is loaded at the node n=C.
As indicated in FIG 15, the next node to take a step is node D, which scans its link d=1 corresponding to ℓ=1. Since the entry for this link in the LINKS array is UP, the node D proceeds to test this link using the procedure WORKING. Since the result of this test is that the link is up, the node D attempts to receive a message on this link using the procedure RECEIVE.
Since no message is available, the node D takes no action. The state of the network after node D takes its step is shown in FIG 16. As indicated in FIG 16, after node D takes its step, the link ℓ=1 fails. The state of the network after the link ℓ=1 fails is shown in FIG 17.
The next node to take a step is node A. As shown in FIG 17, the next link to be scanned by the node A is its link d=2, corresponding to ℓ=2. Since the entry for this link in the LINKS
array is UP, the node A proceeds to test this link using the procedure WORKING. Since the result of this test is that the link is up, the node A attempts to receive a message on this link using the procedure RECEIVE. Since no message is available, the node A takes no action. The state of the network after the step by node A is shown in FIG 18.
As indicated in FIG 18, the next node to take a step is node B, which scans its link d=2 corresponding to ℓ=3. Since the entry for this link in the LINKS array is UP, the node B proceeds to test this link using the procedure WORKING. Since the result of this test is that the link has failed, the node B takes the required actions. As shown in FIG 19, the node B updates its LINKS array to indicate that the link ℓ=3 is down, it loads a new configuration table, and sends a message on all of its incident links that the link ℓ=3 is down, the message sent on the link ℓ=3 being lost.
As indicated in FIG 19, the next node to take a step is node C, which scans its link d=2 corresponding to ℓ=4. Since the entry for this link in the LINKS array is UP, the node C proceeds to test this link using the procedure WORKING. Since the result of this test is that the link is up, the node C attempts to receive a message on this link using the procedure RECEIVE.
Since no message is available, the node C takes no action. (Note, the outgoing message on the link ℓ=3 has been sent in a previous step illustrated in FIG 15.) The state of the network after node C takes its step is shown in FIG 20.
As indicated in FIG 20, the next node to take a step is node D, which scans its link d=2 corresponding to ℓ=5. Since the entry for this link in the LINKS array is UP, the node D proceeds to test this link using the procedure WORKING. Since the result of this test is that the link is up, the node D attempts to receive a message on this link using the procedure RECEIVE. This results in node D receiving a message indicating that the link ℓ=3 is down. Because this information is new to node D, the node D takes the following steps shown in FIG 21. The node D updates its LINKS array to indicate that the link ℓ=3 is down, loads a new configuration table, and sends a message on all its incident links indicating that the link ℓ=3 is DOWN, the message which is sent on link ℓ=1 being lost.
As indicated in FIG 21, the next node to take a step is node A, which scans its link d=1 corresponding to ℓ=1. The node A detects that this link is down using the WORKING procedure and takes the required actions shown in FIG 22. In particular, the node A updates its LINKS array, loads a new configuration table, and sends a message on all its incident links indicating that the link ℓ=1 is down, the message sent via link ℓ=1 being lost.
As indicated in FIG 22, the next node to take a step is the node B, which scans its link d=3 corresponding to ℓ=5. The node B detects that the link ℓ=5 is up using the WORKING procedure and receives a message that the link ℓ=3 is down using the RECEIVE procedure. However, because the node B already knows that the link ℓ=3 is down (this is indicated in its LINKS array), the node
B takes no action. The state of the network after the step by node B is shown in FIG 23.
As indicated in FIG 23, the next node to take a step is node C, which scans its link d=1 corresponding to ℓ=3. Since the LINKS array at node C already indicates that this link is down, no action is taken by node C. The state of the network after node C's step is shown in FIG 24.
As indicated in FIG 24, the next step is taken by node D, which scans its link d=3 corresponding to ℓ=4. The node D finds that the link ℓ=4 is up and receives a message indicating that the link ℓ=3 is down. Since the LINKS array at the node D already indicates that the link ℓ=3 is down, no action is taken. The state of the network after node D's step is shown in FIG 25.
As indicated in FIG 25, the next node to take a step is node A, which scans its link d=2 corresponding to ℓ=2. The node A determines that the link ℓ=2 is up and receives a message indicating that the link ℓ=3 has failed. Since this information is new to node A, as shown in FIG 26, the node A updates its LINKS array, loads a new configuration table which corresponds to the failure of links ℓ=1 and ℓ=3 (see FIGs 9A and 9B), and sends a message on its incident links indicating that ℓ=3 is down. The message that is sent via ℓ=1 is lost.
As indicated in FIG 26, the next step is taken by node B, which scans its link d=1 corresponding to ℓ=2. The node B determines that this link is up and also receives the first message transmitted to it via ℓ=2, which message indicates that
ℓ=1 is down. This information is new to node B, as it is not yet indicated in the LINKS array at node B. Accordingly, node B takes the following steps indicated in FIG 27. Node B updates its LINKS array, loads a new configuration table as determined by the updated LINKS array, and sends a message that ℓ=1 is down via all of its incident links, the message that is sent via ℓ=3 being lost.
As indicated in FIG 27, node C takes the next step by scanning its link d=2 corresponding to ℓ=4. Node C determines that this link is up and receives a message indicating that the link ℓ=3 is down. Because this is already indicated in the LINKS array at node C, no further action is taken. The state of the network after node C's step is indicated in FIG 28.
As indicated in FIG 28, node D takes the next step by scanning its link d=1, corresponding to ℓ=1. The node D determines for the first time that the link ℓ=1 is down. Thus, as shown in FIG 29, node D updates its LINKS array, loads a new configuration table, and sends a message on all its incident links indicating that ℓ=1 is down, wherein the message that is transmitted via the link ℓ=1 is lost.
As indicated in FIG 29, the next node to take a step is node A, which scans its link d=1 corresponding to ℓ=1. Since the node A already knows that this link is down, the node A takes no action. The state of the network after node A's step is shown in FIG 30.
As indicated in FIG 30, the next node to take a step is node B, which scans its link d=2 corresponding to ℓ=3. The node B already knows this link is down as indicated by its LINKS array, so it takes no action. The state of the network after node B's step is shown in FIG 31.
As indicated in FIG 31, the next step is taken by node C, which scans its link d=1 corresponding to ℓ=3. Since the node C already knows this link is down, no action is taken. The state of the network after node C's step is shown in FIG 32.
As indicated in FIG 32, the next step is taken by node D, which scans its link d=2 corresponding to ℓ=5. The node D determines that the link ℓ=5 is up and receives a message indicating that the link ℓ=1 is down. Since this information is already known to the node D, the node D takes no action. The state of the network after node D's step is shown in FIG 33.
As indicated in FIG 33, the next step is taken by node A, which scans its link d=2 corresponding to ℓ=2. The node A determines that the link ℓ=2 is up and receives a message that the link ℓ=1 is down. However, this is not new information for the node A, so no action is taken. The state of the network after node A's step is shown in FIG 34.
As indicated in FIG 34, the next node to take a step is node B, which scans its link d=3 corresponding to ℓ=5. The node B determines that the link ℓ=5 is up and receives a message that the link ℓ=1 is down. Since this is not new information for the
B's st-p is shown in FIG 35 As indicated in FIG 35, the next node to take a step is node C which scans its link d-2 corresponding to ~-4 Th- node C
determines that the link Q~4 is up and receives a me6sagQ
indicating that the link R - 1 iS down Since this is new information for th- nod- C, a~ shown in FIG 36, th- node c updat-- it- LINKS array, loads a new con~iguration table as d-t-rmin-d by th- updat-d LINKS array, and sendJ a m-ssage on its incid-nt link~ indicating that ~-1 is down, th- m -sage that is ~ent via ~-3 being lost A~ indicated in FIG 36, th- n-xt node to tak a step is node D which ~can~ its link d-3 corr--ponding to ~-4 Th- node D
d-termin-~ that the link ~-4 is up and recQives a m s-ag- that th- llnk ~-1 is down Sinc- th- node D alr-ady know- this, no action i~ tak-n Th- ~tat- of th- notwoFk a~t-r nod- D's step is ohown in FIG 37 As indicat-d by FIG 37, th- n-xt stQp is taken by node A
which scans it~ link d-l corr-~ponding to l-l Since thls link is down and this i~ already known by the node A, tho node A takes no furth-r action The state Or the network after node A's step is shown in FIG 38 As indicated in FIG 38, the next step i9 taken by node B
which scans its link d-l corresponding to ~2 The node B
determines that the link ~-2 is up and receives a message indicating that the link Q-3 is down Since this is already , ~
:

.

, . . . . . , ., , : . .
.:: . . .
. . . :~

2034651.

indicated in the LINKS array at node B, node B takes no action.
The state of the network after node B's step is shown in FIG 39.
As indicated in FIG 39, the next step is taken by node C
which scans its link d=1 corresponding to ℓ=3. Since the node C already knows this link is down, no action is taken. The state of the network after node C's step is shown in FIG 40.
As indicated in FIG 40, the next step is taken by node D
which scans its link d=1 corresponding to ℓ=1. Since the node D
already knows this link is down, no action is taken. The state of the network after node D's step is shown in FIG 41.
As indicated in FIG 41, the next node to take a step is node A, which scans its link d=2 corresponding to link ℓ=2. The node A
determines this link is up and takes no action. The state of the network after node A's step is shown in FIG 42.
At this point in the example, no more links fail, no more messages are sent, and no new configuration tables are loaded.
However, in an actual network, the nodes would continue to execute the protocol, thereby continuing to scan incident links for failures or messages.
As can be seen in FIG 42, as a result of the protocol, all of the nodes have converged to a consistent and correct picture of the existing network topology, i.e., that links ℓ=1 and ℓ=3 are down. In addition, each of the nodes has loaded a configuration table corresponding to this failure event. These configuration tables are shown in FIG 9A. The rerouting of the logical

.
.

2034651.

connection as defined by these configuration tables for the failure of ℓ=1 and ℓ=3 is shown in FIG 9B.
In short, a method for reconfiguring a telecommunications network comprising a plurality of reconfigurable cross-connect nodes interconnected by links has been disclosed. The method comprises the step of storing at each node a precomputed configuration table corresponding to each of a plurality of possible network topologies which result from a plurality of possible failure events. After a specific failure event occurs, the network is flooded with messages so that each of the nodes is informed as to the specific existing topology of the network which results from the specific failure event. The nodes are reconfigured in accordance with the specific precomputed configuration tables which correspond to the specific existing network topology.
To accomplish the foregoing, each node continually executes a protocol comprising the steps of:
(a) at each node, sequentially testing all of the links associated with the node which are indicated by a link status array to be working,

.

2034651.

corresponding to the particular pattern of non-working links indicated by the updated link status array, and (c) if a specific link is determined as a result of
the testing step to be working and a message has been received via the specific link indicating that another link in the network is non-working and the link status array does not already indicate that the other link is non-working, sending a message on the links incident to the node indicating that the other link is non-working, updating the link status array at the node, and reconfiguring the node in accordance with a prestored configuration table corresponding to the particular pattern of non-working links indicated by the updated link status array.
Finally, the above-described embodiments of the invention are intended to be illustrative only. Numerous alternative embodiments may be devised by those skilled in the art without departing from the spirit and scope of the following claims.

Claims (13)

WHAT IS CLAIMED IS:
1. A method for reconfiguring a telecommunications network comprising a plurality of reconfigurable cross-connect nodes interconnected by links when a failure event occurs, said method comprising the steps of storing at each node a precomputed configuration table corresponding to each of a plurality of possible network topologies which result from a plurality of possible failure events, after a specific failure event occurs, flooding the network with messages so that each of the nodes is informed as to the specific existing topology of the network which results from the specific failure event, and reconfiguring the nodes in accordance with the specific precomputed configuration tables which correspond to the specific existing network topology.
2. The method of claim 1 wherein said storing step comprises storing at each node a precomputed configuration table for each possible single link failure.
3. The method of claim 2 wherein, when there is a specific single link failure event, said reconfiguring step comprises reconfiguring the nodes in accordance with the precomputed configuration tables corresponding to the specific single link failure event.
4. The method of claim 2 wherein, when there is a failure event involving more than a single link, said reconfiguring step comprises installing at each node one of said single link failure configuration tables in accordance with a consistent rule followed at all of said nodes.
5. The method of claim 4 wherein all of said links in said network have identification numbers, and wherein said consistent rule followed at all of said nodes comprises installing at each node the single link failure configuration table for the failed link having the highest number.
6. The method of claim 1 wherein said storing step comprises storing at each node a precomputed configuration table for each possible single link failure and a precomputed configuration table for the most probable multiple link failures.
7. The method of claim 1 wherein said network is configured initially so there are particular logical connections established in said network, and wherein said configuration tables are computed to maintain said logical connections when corresponding failure events occur.
8. The method of claim 1 wherein said specific failure event comprises failure of one of said nodes.
9. A protocol carried out at each node in a network comprising a plurality of reconfigurable cross-connect nodes interconnected by links to reconfigure the network in the event of a failure event, said protocol comprising the steps of:

at each node, maintaining an array indicating the status of all of the links in the network, at each node, storing a plurality of predetermined configuration tables corresponding to a plurality of possible patterns of non-working links, at each node, sequentially testing all of the links associated with the node which are indicated by the array to be working, if a specific link is determined as a result of the testing step to be non-working, updating the array, sending a message on all links associated with the node indicating that the specific link is non-working, and reconfiguring the node in accordance with the predetermined configuration table corresponding to the pattern of non-working links indicated by the updated array if the node is not already in this configuration, and if a specific link is determined as a result of the testing step to be working, determining if a message has been received via the specific link indicating that another link in the network is non-working, if a message has been received via the specific link indicating that another link in the network is non-working and the array does not already indicate that the other link is non-working, updating the array, sending a message on all links associated with the node indicating that the other link is non-working, and reconfiguring the node in accordance with the predetermined configuration table corresponding to the pattern of non-working links indicated by the updated array if the node is not already in this configuration.
10. A protocol carried out continuously at each node in a network comprising a plurality of reconfigurable cross-connect nodes interconnected by links, said protocol comprising the steps of: (a) at each node, sequentially testing all of the links associated with the node which are indicated by a link status array to be working, (b) if a specific link is determined as a result of said testing step to be non-working, updating the link status array, sending a message on the working links associated with the node that the specific link is non-working, and reconfiguring the node in accordance with a prestored configuration table corresponding to the particular pattern of non-working links indicated by the updated link status array, and (c) if a specific link is determined as a result of the testing step to be working and a message has been received via the specific link indicating that another link in the network is non-working and the link status array does not already indicate that the other link is non-working, sending a message on the working links associated with the node indicating that the other link is non-working, updating the link status array at the node, and reconfiguring the node in accordance with a prestored configuration table corresponding to the particular pattern of non-working links indicated by the updated link status array.
11. A method for reconfiguring a telecommunications network to maintain a set of logical connections defined in said network in the event of a failure of a component in said network, said method comprising storing at each node in said network a precomputed configuration table corresponding to each of a plurality of possible network component failures, after a specific failure of one or more network components, flooding the network with messages so that each of the nodes is informed of the specific failure, and reconfiguring the nodes in accordance with the specific prestored configuration tables corresponding to the specific failure to maintain said set of logical connections.
12. The method of claim 11 wherein said specific failure comprises the failure of a link of said network.
13. The method of claim 11 wherein said specific failure comprises the failure of a node of said network.
CA002034651A 1990-03-27 1991-01-21 Distributed protocol for improving the survivability of telecommunications trunk networks Abandoned CA2034651A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/499,881 1990-03-27
US07/499,881 US5093824A (en) 1990-03-27 1990-03-27 Distributed protocol for improving the survivability of telecommunications trunk networks

Publications (1)

Publication Number Publication Date
CA2034651A1 true CA2034651A1 (en) 1991-09-28

Family

ID=23987133

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002034651A Abandoned CA2034651A1 (en) 1990-03-27 1991-01-21 Distributed protocol for improving the survivability of telecommunications trunk networks

Country Status (5)

Country Link
US (1) US5093824A (en)
EP (1) EP0525121A4 (en)
JP (1) JPH05506976A (en)
CA (1) CA2034651A1 (en)
WO (1) WO1991015066A1 (en)

Families Citing this family (121)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2784080B2 (en) 1990-05-09 1998-08-06 富士通株式会社 Ring network, fault recovery method therefor, and node used in ring network
WO1992022971A1 (en) * 1991-06-18 1992-12-23 Fujitsu Limited Method for determining alternative route
FR2691312B1 (en) * 1992-05-18 1994-07-29 Telecommunications Sa METHOD FOR DECENTRALIZED MANAGEMENT OF THE ROUTING OF COMMUNICATIONS IN A NETWORK OF PACKET SWITCHES.
GB2271041B (en) * 1992-09-23 1996-03-20 Netcomm Ltd Data network switch
US5574860A (en) * 1993-03-11 1996-11-12 Digital Equipment Corporation Method of neighbor discovery over a multiaccess nonbroadcast medium
JPH0758744A (en) * 1993-08-20 1995-03-03 Fujitsu Ltd Call storage system wherein emergency call and normal call are mixedly present
JP3095314B2 (en) * 1993-08-31 2000-10-03 株式会社日立製作所 Path switching method
US5412652A (en) * 1993-09-24 1995-05-02 Nec America, Inc. Sonet ring subnetwork management method
JPH07143140A (en) * 1993-11-15 1995-06-02 Fujitsu Ltd Universal link configurator
US5495471A (en) * 1994-03-09 1996-02-27 Mci Communications Corporation System and method for restoring a telecommunications network based on a two prong approach
US5544150A (en) * 1994-04-07 1996-08-06 Medialink Technologies Corporation Method and apparatus for determining and indicating network integrity
JPH0897841A (en) * 1994-09-29 1996-04-12 Hitachi Ltd Method for controlling path changeover transmitter and the path changeover transmitter
US5535213A (en) * 1994-12-14 1996-07-09 International Business Machines Corporation Ring configurator for system interconnection using fully covered rings
US5802278A (en) * 1995-05-10 1998-09-01 3Com Corporation Bridge/router architecture for high performance scalable networking
GB2344499B (en) * 1995-05-10 2000-10-18 3Com Corp A method for transferring data on a communication medium from a source processor to a destination processor, the data including messages of a first transmit
US5592622A (en) * 1995-05-10 1997-01-07 3Com Corporation Network intermediate system with message passing architecture
US5623481A (en) * 1995-06-07 1997-04-22 Russ; Will Automated path verification for SHN-based restoration
US5646936A (en) * 1995-06-22 1997-07-08 Mci Corporation Knowledge based path set up and spare capacity assignment for distributed network restoration
US5590119A (en) * 1995-08-28 1996-12-31 Mci Communications Corporation Deterministic selection of an optimal restoration route in a telecommunications network
US5732086A (en) * 1995-09-21 1998-03-24 International Business Machines Corporation System and method for determining the topology of a reconfigurable multi-nodal network
CA2161847A1 (en) * 1995-10-31 1997-05-01 Wayne D. Grover Method for preconfiguring a network to withstand anticipated failures
US5590120A (en) * 1995-10-31 1996-12-31 Cabletron Systems, Inc. Port-link configuration tracking method and apparatus
US5815490A (en) * 1995-11-20 1998-09-29 Nec America, Inc. SDH ring high order path management
JP2943677B2 (en) * 1995-12-06 1999-08-30 日本電気株式会社 Line detour control system for asynchronous transfer mode communication
US6108530A (en) 1995-12-14 2000-08-22 Lucent Technologies Inc. System and method for transmitting a displayable message between short message entities in more than one data package
JP3432664B2 (en) * 1996-02-14 2003-08-04 富士通株式会社 Communication node, failure recovery method, and communication network
JP3001410B2 (en) * 1996-03-28 2000-01-24 日本電気テレコムシステム株式会社 Automatic detour routing method
US5734811A (en) * 1996-06-26 1998-03-31 Mci Corporation Segment substitution/swap for network restoration pre-plans
US5987011A (en) * 1996-08-30 1999-11-16 Chai-Keong Toh Routing method for Ad-Hoc mobile networks
ID22055A (en) * 1996-12-06 1999-08-26 Bell Communications Res Cross-connected rings for survivable multi-wavelength optical communication networks
US5883881A (en) * 1996-12-30 1999-03-16 Mci Communications Corporation Method for selecting preferred nodes for distributed network restoration
US5999286A (en) * 1997-01-09 1999-12-07 Alcatel Method and system for restoring a distributed telecommunications network
US6041049A (en) * 1997-05-06 2000-03-21 International Business Machines Corporation Method and apparatus for determining a routing table for each node in a distributed nodal system
US5963448A (en) * 1997-06-18 1999-10-05 Allen-Bradley Company, Llc Industrial controller having redundancy and using connected messaging and connection identifiers to enable rapid switchover without requiring new connections to be opened or closed at switchover
US6421349B1 (en) * 1997-07-11 2002-07-16 Telecommunications Research Laboratories Distributed preconfiguration of spare capacity in closed paths for network restoration
US6347074B1 (en) * 1997-08-13 2002-02-12 Mci Communications Corporation Centralized method and system for excluding components from a restoral route in a communications network
US5941992A (en) * 1997-08-13 1999-08-24 Mci Communications Corporation Distributed method and system for excluding components from a restoral route in a communications network
US6377543B1 (en) 1997-08-13 2002-04-23 Telecommunications Research Laboratories Path restoration of networks
US6130875A (en) * 1997-10-29 2000-10-10 Lucent Technologies Inc. Hybrid centralized/distributed precomputation of network signal paths
US6151304A (en) * 1997-10-29 2000-11-21 Lucent Technologies Inc. Distributed precomputation of network signal paths with improved performance through parallelization
US6021113A (en) * 1997-10-29 2000-02-01 Lucent Technologies Inc. Distributed precomputation of network signal paths with table-based link capacity control
US6215763B1 (en) 1997-10-29 2001-04-10 Lucent Technologies Inc. Multi-phase process for distributed precomputation of network signal paths
DE69932461T2 (en) 1998-02-16 2007-08-23 Ericsson Ab telecommunications systems
GB2334408B (en) * 1998-02-16 2003-02-26 Marconi Comm Ltd Telecommunications systems
US6414771B2 (en) * 1998-04-27 2002-07-02 Lucent Technologies Inc. Optical transmission system including optical restoration
IL124770A0 (en) * 1998-06-04 1999-01-26 Shunra Software Ltd Apparatus and method for testing network applications
US6643693B1 (en) * 1998-09-15 2003-11-04 Crossroads Systems, Inc. Method and system for managing I/O transmissions in a fibre channel network after a break in communication
US6404734B1 (en) 1998-10-06 2002-06-11 Telecommuncations Research Laboratories Scalable network restoration device
US6631134B1 (en) * 1999-01-15 2003-10-07 Cisco Technology, Inc. Method for allocating bandwidth in an optical network
US6912221B1 (en) 1999-01-15 2005-06-28 Cisco Technology, Inc. Method of providing network services
US6990068B1 (en) 1999-01-15 2006-01-24 Cisco Technology, Inc. Virtual path restoration scheme using fast dynamic mesh restoration in an optical network
US7764596B2 (en) * 2001-05-16 2010-07-27 Cisco Technology, Inc. Method for restoring a virtual path in an optical network using dynamic unicast
US6801496B1 (en) 1999-01-15 2004-10-05 Cisco Technology, Inc. Network addressing scheme for reducing protocol overhead in an optical network
US7428212B2 (en) * 1999-01-15 2008-09-23 Cisco Technology, Inc. Best effort technique for virtual path restoration
US6856627B2 (en) * 1999-01-15 2005-02-15 Cisco Technology, Inc. Method for routing information over a network
US7352692B1 (en) 1999-01-15 2008-04-01 Cisco Technology, Inc. Resource reservation scheme for path restoration in an optical network
US6301254B1 (en) 1999-03-15 2001-10-09 Tellabs Operations, Inc. Virtual path ring protection method and apparatus
US6992978B1 (en) 1999-06-02 2006-01-31 Alcatel Communications, Inc. Method and system for path protection in a communications network
US6826146B1 (en) * 1999-06-02 2004-11-30 At&T Corp. Method for rerouting intra-office digital telecommunications signals
US6657969B1 (en) * 1999-06-29 2003-12-02 Cisco Technology, Inc. Generation of synchronous transport signal data used for network protection operation
US7069320B1 (en) 1999-10-04 2006-06-27 International Business Machines Corporation Reconfiguring a network by utilizing a predetermined length quiescent state
US6549513B1 (en) * 1999-10-12 2003-04-15 Alcatel Method and apparatus for fast distributed restoration of a communication network
US6714537B1 (en) 1999-10-19 2004-03-30 Ciena Corp. Switch fabric architecture and techniques for implementing rapid hitless switchover
US6711409B1 (en) 1999-12-15 2004-03-23 Bbnt Solutions Llc Node belonging to multiple clusters in an ad hoc wireless network
US7167444B1 (en) * 1999-12-29 2007-01-23 At&T Corp. Family ring protection technique
AU2611701A (en) * 1999-12-30 2001-07-16 Computer Associates Think, Inc. System and method for topology based monitoring of networking devices
US6614785B1 (en) 2000-01-05 2003-09-02 Cisco Technology, Inc. Automatic propagation of circuit information in a communications network
US6456599B1 (en) 2000-02-07 2002-09-24 Verizon Corporate Services Group Inc. Distribution of potential neighbor information through an ad hoc network
US6775709B1 (en) 2000-02-15 2004-08-10 Brig Barnum Elliott Message routing coordination in communications systems
US6851005B1 (en) 2000-03-03 2005-02-01 International Business Machines Corporation Apparatus and method for implementing raid devices in a cluster computer system
US6636982B1 (en) 2000-03-03 2003-10-21 International Business Machines Corporation Apparatus and method for detecting the reset of a node in a cluster computer system
US6460149B1 (en) * 2000-03-03 2002-10-01 International Business Machines Corporation Suicide among well-mannered cluster nodes experiencing heartbeat failure
US7426179B1 (en) 2000-03-17 2008-09-16 Lucent Technologies Inc. Method and apparatus for signaling path restoration information in a mesh network
US7035223B1 (en) 2000-03-23 2006-04-25 Burchfiel Jerry D Method and apparatus for detecting unreliable or compromised router/switches in link state routing
US6725274B1 (en) 2000-03-29 2004-04-20 Bycast Inc. Fail-safe system for distributing streaming media having a dynamically reconfigurable hierarchy of ring or mesh topologies
US6977937B1 (en) 2000-04-10 2005-12-20 Bbnt Solutions Llc Radio network routing apparatus
US7573915B1 (en) * 2000-04-25 2009-08-11 Cisco Technology, Inc. Method and apparatus for transporting network management information in a telecommunications network
US6667960B1 (en) * 2000-04-29 2003-12-23 Hewlett-Packard Development Company, L.P. Protocol for identifying components in a point-to-point computer system
US6987726B1 (en) 2000-05-22 2006-01-17 Bbnt Solutions Llc Management of duplicated node identifiers in communication networks
US6885644B1 (en) 2000-05-30 2005-04-26 International Business Machines Corporation Topology propagation in a distributed computing environment with no topology message traffic in steady state
WO2001092992A2 (en) 2000-06-01 2001-12-06 Bbnt Solutions Llc Method and apparatus for varying the rate at which broadcast beacons are transmitted
US7342873B1 (en) 2000-06-06 2008-03-11 Lucent Technologies Inc. Efficient architectures for protection against network failures
US7096275B1 (en) 2000-06-06 2006-08-22 Lucent Technologies Inc. Methods and apparatus for protection against network failures
US7302704B1 (en) 2000-06-16 2007-11-27 Bbn Technologies Corp Excising compromised routers from an ad-hoc network
US6493759B1 (en) 2000-07-24 2002-12-10 Bbnt Solutions Llc Cluster head resignation to improve routing in mobile communication systems
US20030142678A1 (en) * 2000-07-28 2003-07-31 Chan Eric L. Virtual path ring protection method and apparatus
US6973053B1 (en) 2000-09-12 2005-12-06 Bbnt Solutions Llc Using direct cluster member to cluster member links to improve performance in mobile communication systems
WO2002035780A1 (en) * 2000-10-20 2002-05-02 Ciena Corporation A switch fabric architecture and techniques for implementing rapid hitless switchover
US6973039B2 (en) * 2000-12-08 2005-12-06 Bbnt Solutions Llc Mechanism for performing energy-based routing in wireless networks
US6941388B1 (en) * 2000-12-22 2005-09-06 Cisco Technology, Inc. Method and apparatus for handling connections in hot standby line cards
US6704301B2 (en) * 2000-12-29 2004-03-09 Tropos Networks, Inc. Method and apparatus to provide a routing protocol for wireless devices
US7035202B2 (en) 2001-03-16 2006-04-25 Juniper Networks, Inc. Network routing using link failure information
US6836392B2 (en) * 2001-04-24 2004-12-28 Hitachi Global Storage Technologies Netherlands, B.V. Stability-enhancing underlayer for exchange-coupled magnetic structures, magnetoresistive sensors, and magnetic disk drive systems
US7477594B2 (en) * 2001-05-16 2009-01-13 Cisco Technology, Inc. Method for restoring a virtual path in an optical network using 1:N protection
US7155120B1 (en) * 2001-07-30 2006-12-26 Atrica Israel Ltd. Link level network protection path calculation mechanism for use in optical networks
US6766482B1 (en) 2001-10-31 2004-07-20 Extreme Networks Ethernet automatic protection switching
US7120456B1 (en) 2001-11-07 2006-10-10 Bbn Technologies Corp. Wireless terminals with multiple transceivers
IL146588A (en) * 2001-11-20 2006-12-31 Eci Telecom Ltd High speed dissemination of failure information in mesh networks
US20030223405A1 (en) * 2002-05-31 2003-12-04 El-Bawab Tarek S. WDM metropolitan access network architecture based on hybrid switching
CA2434115A1 (en) * 2002-12-05 2004-06-05 Telecommunications Research Laboratories Method for design of networks based on p-cycles
US7983239B1 (en) 2003-01-07 2011-07-19 Raytheon Bbn Technologies Corp. Systems and methods for constructing a virtual model of a multi-hop, multi-access network
US7352703B2 (en) 2003-04-29 2008-04-01 Alcatel Lucent Protection scheme for a communications network under multiple failures
US20040246902A1 (en) * 2003-06-02 2004-12-09 Weinstein Joseph J. Systems and methods for synchronizing multiple copies of a database using database digest
US7362703B1 (en) * 2003-07-10 2008-04-22 Sprint Communications Company L.P. Method for deflection routing of data packets to alleviate link overload in IP networks
US7881229B2 (en) * 2003-08-08 2011-02-01 Raytheon Bbn Technologies Corp. Systems and methods for forming an adjacency graph for exchanging network routing data
US7606927B2 (en) 2003-08-27 2009-10-20 Bbn Technologies Corp Systems and methods for forwarding data units in a communications network
ITMI20031743A1 (en) * 2003-09-11 2005-03-12 Marconi Comm Spa METHOD FOR THE ACTIVATION OF PRE-PLANNED CIRCUITS IN
US7668083B1 (en) 2003-10-28 2010-02-23 Bbn Technologies Corp. Systems and methods for forwarding data in a communications network
US7592894B2 (en) * 2004-06-10 2009-09-22 Ciena Corporation Reconfigurable switch having an overlapping Clos Architecture
US7190633B2 (en) 2004-08-24 2007-03-13 Bbn Technologies Corp. Self-calibrating shooter estimation
US7126877B2 (en) * 2004-08-24 2006-10-24 Bbn Technologies Corp. System and method for disambiguating shooter locations
US7602777B2 (en) * 2004-12-17 2009-10-13 Michael Ho Cascaded connection matrices in a distributed cross-connection system
US7492716B1 (en) * 2005-10-26 2009-02-17 Sanmina-Sci Method for efficiently retrieving topology-specific data for point-to-point networks
US7684354B2 (en) * 2006-08-04 2010-03-23 Schlumberger Technology Corporation Method and system for analyzing the topology of a multiprotocol label switching (MPLS)/virtual private network (VPN) network
US7995914B2 (en) * 2008-03-28 2011-08-09 Mci Communications Services, Inc. Method and system for providing fault recovery using composite transport groups
US8437223B2 (en) * 2008-07-28 2013-05-07 Raytheon Bbn Technologies Corp. System and methods for detecting shooter locations from an aircraft
US8139504B2 (en) * 2009-04-07 2012-03-20 Raytheon Bbn Technologies Corp. System, device, and method for unifying differently-routed networks using virtual topology representations
US8320217B1 (en) 2009-10-01 2012-11-27 Raytheon Bbn Technologies Corp. Systems and methods for disambiguating shooter locations with shockwave-only location
US9369408B1 (en) 2014-01-31 2016-06-14 Google Inc. High performance and resilience in wide area networking
US10110423B2 (en) * 2016-07-06 2018-10-23 Ciena Corporation System and method for managing network connections
US10362117B1 (en) * 2017-06-28 2019-07-23 Rockwell Collins, Inc. Systems and methods for modified network routing based on modal information

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IT1199859B (en) * 1985-03-06 1989-01-05 Cselt Centro Studi Lab Telecom Reconfigurable high-speed integrated local/metropolitan network
EP0221360B1 (en) * 1985-11-04 1992-12-30 International Business Machines Corporation Digital data message transmission networks and the establishing of communication paths therein
US4747100A (en) * 1986-08-11 1988-05-24 Allen-Bradley Company, Inc. Token passing network utilizing active node table
JP2574789B2 (en) * 1987-03-06 1997-01-22 株式会社日立製作所 Digital network control method.
US4920529A (en) * 1987-02-27 1990-04-24 Hitachi, Ltd. Network control method and apparatus therefor
JPH0213154A (en) * 1988-06-30 1990-01-17 Fujitsu Ltd Bypass route selection processing system
GB8817288D0 (en) * 1988-07-20 1988-08-24 Racal Milgo Ltd Methods of & networks for information communication
JPH0265335A (en) * 1988-08-31 1990-03-06 Fujitsu Ltd Updating system for detouring pass table

Also Published As

Publication number Publication date
JPH05506976A (en) 1993-10-07
WO1991015066A1 (en) 1991-10-03
EP0525121A4 (en) 1993-06-30
US5093824A (en) 1992-03-03
EP0525121A1 (en) 1993-02-03

Similar Documents

Publication Publication Date Title
CA2034651A1 (en) Distributed protocol for improving the survivability of telecommunications trunk networks
Gerstel et al. Fault tolerant multiwavelength optical rings with limited wavelength conversion
Grover et al. Cycle-oriented distributed preconfiguration: ring-like speed with mesh-like capacity for self-planning network restoration
US6421349B1 (en) Distributed preconfiguration of spare capacity in closed paths for network restoration
Grover et al. Bridging the ring-mesh dichotomy with p-cycles
US5412376A (en) Method for structuring communications network based on asynchronous transfer mode
US5884017A (en) Method and system for optical restoration tributary switching in a fiber network
US7133359B2 (en) Fast restoration mechanism and method of determining minimum restoration capacity in a transmission networks
Coan et al. Using distributed topology update and preplanned configurations to achieve trunk network survivability
US20080101364A1 (en) Inter-working mesh telecommunications networks
WO1997024900A9 (en) Method and system for optical restoration tributary switching in a fiber network
US6810496B1 (en) System and method for troubleshooting a network
Wu A passive protected self-healing mesh network architecture and applications
Flanagan Fiber network survivability
Falconer Service assurance in modern telecommunications networks
EP2171937B1 (en) Protection mechanisms for a communications network
Johnson Survivability strategies for broadband networks
Shi et al. Analysis and design of survivable telecommunications networks
Coan et al. A distributed protocol to improve the survivability of trunk networks
US5734811A (en) Segment substitution/swap for network restoration pre-plans
EP0484943A2 (en) Method for restructuring communications network based on asynchronous transfer mode in case of failure
Shi et al. Interconnection of self-healing rings
Iraschko et al. A distributed real time path restoration protocol with performance close to centralized multi-commodity max flow
US20030086367A1 (en) Method for monitoring spare capacity of a dra network
US6950883B1 (en) Ring capacity efficient architecture

Legal Events

Date Code Title Description
FZDE Discontinued