WO2001095585A2 - Message queue server system - Google Patents

Message queue server system

Info

Publication number
WO2001095585A2
Authority
WO
WIPO (PCT)
Prior art keywords
information, mainframe, queue, data, manager
Application number
PCT/US2001/017858
Other languages
French (fr)
Other versions
WO2001095585A3 (en)
Inventor
Graham G. Yarbrough
Original Assignee
Inrange Technologies Corporation
Application filed by Inrange Technologies Corporation
Priority to AU2001275151A1
Priority to CA2381189A1
Publication of WO2001095585A2
Publication of WO2001095585A3

Classifications

    • H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/59 Providing operational support to end devices by off-loading in the network or by emulation, e.g. when they are unavailable
    • H04L49/90 Buffering arrangements
    • H04L49/901 Buffering arrangements using storage descriptor, e.g. read or write pointers
    • H04L67/08 Protocols specially adapted for terminal emulation, e.g. Telnet
    • H04L67/565 Conversion or adaptation of application format or content
    • H04L9/40 Network security protocols
    • H04L69/08 Protocols for interworking; Protocol conversion

Definitions

  • Fig. 3 is a block diagram of a mainframe 100a (Mainframe A) in communication with a queue server 300 employing the principles of the present invention.
  • the queue server 300 is in communication with a computer network 350 (e.g., the Internet) and an open system computer 345.
  • Mainframe A has an operating system and legacy applications, such as applications written in COBOL.
  • the operating system and legacy applications are not inherently capable of communicating with today's open systems computer networks and computers.
  • Mainframe A does have data useful to open systems and other mainframes (not shown), so the queue server 300 acts as a transfer agent between Mainframe A and computers connected to the open systems computer networks and computers.
  • the channel 312 includes three components: a communication link 305 and two interface cards 310, one located at Mainframe A and the other at the queue server 300.
  • the interface card 310 located in the queue server may support block message transfers and non-volatile memory, as described in U.S. Provisional Patent Application No. 60/209,054, filed June 2, 2000, entitled "Enhanced EET-3 Channel Adapter Card," by Haulund et al., and in a co-pending U.S. Patent Application, filed concurrently herewith, entitled "," by Haulund et al., the entire teachings of both of which are incorporated herein by reference.
  • Mainframe A also receives information from the queue server 300 over the same channel 312.
  • the channel 312 is basically transparent to Mainframe A and the queue server 300.
  • Mainframes, such as Mainframe A, have traditional device peripherals that support the mainframes.
  • mainframes are capable of communicating with printers and tape drives. That means that the applications running on the operating system on Mainframe A have the "hooks" for communicating with a printer and tape drive.
  • the queue server 300 takes advantage of this commonality among mainframes by providing an interface to the legacy applications with which they are already familiar.
  • the queue server 300 has a device emulator 315 that serves as a transceiver with the legacy applications via the channels 312.
  • the queue server 300 emulates a peripheral known to mainframes.
  • the device emulator 315 is composed of multiple tape drive emulators 320.
  • the tape drive emulators 320 are merely software instances that interact with the interface cards 310.
  • the tape drive emulators 320 provide low-level control reactions that adhere to the stringent timing requirements of traditional commercial tape drives that mainframes use to read and write data. In this way, the legacy applications are under the impression that they are simply reading and writing data from and to a tape drive, unaware that the data is being transferred to computers using other protocols.
  • the data received by the tape drive emulators 320 are provided to memory 330, as supported by a protocol transfer manager 325. Once in memory 330, the data provided by the legacy applications are then capable of being transferred to commercial messaging middleware 335.
  • the commercial messaging middleware 335 is also supported by the protocol transfer manager 325, which supports read/write transactions of the commercial messaging middleware 335 with the memory 330 and higher-level administrative activities.
  • the commercial messaging middleware 335 interfaces with an interface card 338, such as a TCP/IP interface card, that connects to a modern computer network, such as the Internet 350, via any type of network line 340.
  • the network line 340 could be a fiber optic cable, local area network cable, wireless interface, etc.
  • a desktop computer 345 could be directly coupled to the commercial messaging middleware 335 via the network line 340 and TCP/IP interface card 338.
  • the queue server 300 is solving the problem of getting data from mainframes into a standard, commercial environment that is easily accessed by today's commercial programs.
  • the commercial programs may then use the data from the mainframes to publish, filter, or transform the data for later use by, for example, airline representatives, agents, or consumers who wish to access the data for flight planning or other reasons.
  • the queue server 300 may also act as an interface between various operating systems of mainframes. For example, a TPF (transaction processing facility) mainframe operating system used for reservations and payment transactions can transfer the TPF data to a mainframe using a VM (virtual machine) mainframe operating system or to a mainframe using the MVS (multiple virtual storage) mainframe operating system.
  • the queue server 300 allows data flow between the various mainframes by temporarily storing data in messages in persistent message queues in the memory 330.
  • the memory 330 is not intended as a permanent storage location as in the case of a physical reel tape, but will retain the messages containing the data until instructed to discard them.
  • the messages stored in the memory 330 are typically arranged in a queue in the same manner as messages are stored on a tape drive because the legacy systems are already programmed to store the data in that manner. Therefore, the legacy applications on the mainframes do not need to be rewritten in any way to transmit and receive data from the memory 330.
  • the queue is logically an indexed sequential data set file, which may also use various queuing models, such as first-in, first-out (FIFO); last-in, first-out (LIFO); or priority queuing models. It should be understood that the memory 330 is very large (e.g., terabytes) to accommodate all the data that is usually stored on large computer tapes.
  • the data exchange between the mainframes can be done in near-real-time or non-real-time, depending on the length of the queue. For example, if the queue storing the messages has a length of one message, then the data exchange is near-real-time since the message is forwarded to the receiving mainframe once the queue is full with the one message. If the length of the queue is several hundred messages, then data from a first mainframe is written until the queue is filled, and then the data is transferred to the second mainframe in a typical tape drive-to-mainframe manner.
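As a rough sketch of this queue-length tradeoff, the following Python fragment assumes a simple depth-triggered forwarding policy, which the text describes but does not spell out; the names are illustrative.

```python
# Depth-1 queue => near-real-time forwarding; deep queue => tape-style batching.
class ForwardingQueue:
    def __init__(self, depth, forward):
        self.depth = depth          # queue length: 1 for near-real-time, N for batch
        self.forward = forward      # delivery callable for the receiving mainframe
        self.messages = []

    def put(self, message):
        self.messages.append(message)
        if len(self.messages) >= self.depth:    # queue full: hand the batch over
            self.forward(list(self.messages))
            self.messages.clear()

near_real_time = ForwardingQueue(1, lambda batch: print("deliver", len(batch)))
near_real_time.put(b"record")       # forwarded immediately: prints "deliver 1"
batched = ForwardingQueue(300, lambda batch: print("deliver", len(batch)))
batched.put(b"record")              # held until 300 messages have accumulated
```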
  • the channel 312 typically transfers messages on a message-by-message basis.
  • Fig. 4 is a detailed block diagram of the queue server 300.
  • the queue server 300 allows storage of many messages at a time, which allows the protocol transfer manager 325 to configure the tape drive emulators 320 in a mode supporting direct memory access (DMA) transfer of messages to improve data flow in the emulator-to-memory link of the data flow.
  • the queue server 300 has (i) a front-end that includes adapter cards 310 and tape drive emulators 320, (ii) a protocol transfer manager 325 that includes software processes, and (iii) a back-end that includes networking middleware 335 and network interface card 338, where the networking middleware 335 is connected to a network line 340 via the network interface card 338.
  • the adapter cards 310 and tape drive emulators 320 compose a device emulator 315.
  • a single tape drive emulator 320 is coupled to and supporting a single adapter card 310.
  • because the tape drive emulator 320 is embodied as one or more software instances, there can be many tape drive emulators connecting to a single adapter card 310, and vice versa.
  • the tape drive emulator 320 and I/O manager 400 support the standard channel command words provided by legacy applications operating on a mainframe, such as Mainframe A.
  • the channel command words include read, write, mount, dismount, and other tape drive commands that are normally used to control a tape drive.
  • in other embodiments, the tape drive emulators 320 emulate a different mainframe peripheral device; in that case, the tape drive emulators 320 support a different, respective set of command words provided by the legacy applications for communicating with that different mainframe peripheral device.
  • located between the I/O manager 400 and the tape drive emulators 320 is at least one group driver 405.
  • the group drivers 405 are also software instances, as in the case of the tape drive emulators 320.
  • the group drivers 405 are intended to off-load some of the processing required by the I/O manager 400 so that the I/O manager does not have to interface directly with each of the tape drive emulators 320.
  • Each group driver 405 provides interface support for one or more associated tape drive emulator(s) 320 and the I/O manager 400.
  • the group drivers 405 multiplex signals from the number of tape drive emulators 320 with which they are associated. Because the group drivers 405 are software instances, any number of group drivers 405 can be provided to support the tape drive emulators 320.
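A minimal sketch of this fan-in, with illustrative names (the text does not define a concrete programming interface), might look like:

```python
# One group driver multiplexes control messages from several tape drive
# emulators into a single stream for the I/O manager. Names are illustrative.
class GroupDriver:
    def __init__(self, io_manager_inbox):
        self.inbox = io_manager_inbox       # shared queue read by the I/O manager

    def relay(self, emulator_id, control_message):
        # Tag each message with its emulator so replies can be routed back.
        self.inbox.append((emulator_id, control_message))

inbox = []
driver = GroupDriver(inbox)
for emulator_id in ("tape0", "tape1", "tape2"):     # many emulators, one driver
    driver.relay(emulator_id, "MOUNT_REQUEST_RECEIVED")
print(inbox)    # one multiplexed stream: [('tape0', ...), ('tape1', ...), ...]
```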
  • because the I/O manager 400 is a software instance, there can be many I/O managers 400 operating in the queue server 300.
  • the protocol transfer manager 325 can be configured to provide parallel processing functionality for the mainframes and open systems being serviced.
  • the queue server 300 is composed of electronics that include computer processors on which the I/O manager 400, group drivers 405, tape drive emulators 320, and commercial messaging middleware 335 are executed. There may be several processors for parallel or distributed processing.
  • the queue server 300 also includes other circuitry to allow the computer processors to interface with the adapter cards 310, memory 330, and TCP/IP interface card 338.
  • the queue server 300 may include additional memory (not shown), such as RAM, ROM, and/or magnetic or optical disks to store the software listed above.
  • the memory both for the software and the queues is preferably local to the queue server 300, but may be remote and accessed over a local area network or wide area network. In the case of the queues, the delay in accessing the memory will cause additional latency in transferring the messages, but will not affect the interaction with the mainframes that require rapid response to requests since the tape drive emulators 320 handle that function.
  • the messages are stored as queues 415a, 415b, ..., 415n (collectively 415) in a volume 410, as in the case of a standard tape.
  • the queues 415 are managed by using information that is normally contained in a standard tape label. For example, to build the queue name, the volume serial number and data set name are used in one embodiment. Another piece of data that is normally contained in a standard tape label is an expiration date, which allows the I/O manager 400 to decide how long to retain the message queue 415 in the memory 330.
  • Security attributes found in a standard tape label are used by the I/O manager 400 to apply security attributes to the messages in the respective queues 415.
  • Other information contained in the standard tape label may be used by the I/O manager 400 to optimize the messages in the queue based on the data characteristics of the messages. Mounting the queue, which is done by selecting the pointer (i.e., software pointer storing the hexadecimal memory location) pointing to the head of the queue, is performed by the I/O manager 400 based on receiving a volume ID or data set name request message from Mainframe A. It should be understood that the management features based on the standard tape label information just provided are merely exemplary of the types of actions that can be performed by the I/O manager 400 in managing the queues. Another feature, for example, is a tape mark action that marks an indicator within the associated message queue.
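To make the label-driven management concrete, here is a hypothetical Python sketch that builds a queue name from the volume serial number and data set name, retains queues until their expiration date, and carries the label's security attributes. The field and class names follow common tape-label conventions and are assumptions, not definitions from the patent.

```python
# Hypothetical sketch of label-driven queue management.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TapeLabel:
    volser: str                  # volume serial number
    dsname: str                  # data set name
    expiration: date             # how long to retain the queue
    security: str                # security attributes

@dataclass
class MessageQueue:
    name: str
    security: str
    expires: date
    messages: list = field(default_factory=list)

class IOManager:
    def __init__(self):
        self.queues = {}

    def apply_label(self, label):
        # Build the queue name from the volume serial number and data set name.
        q = MessageQueue(f"{label.volser}.{label.dsname}",
                         label.security, label.expiration)
        self.queues[q.name] = q
        return q

    def expire(self, today):
        # The label's expiration date decides how long each queue is retained.
        self.queues = {n: q for n, q in self.queues.items() if q.expires > today}

    def mount(self, name):
        # 'Mounting' amounts to handing back a reference to the queue's head.
        return self.queues[name]

mgr = IOManager()
mgr.apply_label(TapeLabel("VOL001", "FLIGHT.DATA", date(2002, 6, 2), "READ"))
```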
  • Mainframe A provides many commands to the queue server 300 for handling messages in queues. These commands are typical of communication with a real tape drive, but here, the tape drive emulators 320 receive the commands and either (i) provide fast response to Mainframe A in response to those commands or (ii) allow the commands to pass unfettered to the I/O manager 400 for administrative non-real-time processing.
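A minimal dispatch sketch of that split is shown below; which commands fall on the fast path versus the administrative path here is an assumption for illustration.

```python
# Sketch of the fast/administrative command split described above.
FAST_COMMANDS = {"READ", "WRITE"}                  # need tape-speed responses
ADMIN_COMMANDS = {"MOUNT", "DISMOUNT", "REWIND", "UNLOAD"}

def dispatch(ccw, respond_to_channel, send_to_io_manager):
    """Route one channel command word (CCW) arriving from the mainframe."""
    if ccw in FAST_COMMANDS:
        respond_to_channel(f"{ccw} accepted")      # emulator answers immediately
    elif ccw in ADMIN_COMMANDS:
        send_to_io_manager(ccw)                    # non-real-time, handled upstream
    else:
        respond_to_channel("UNIT CHECK")           # unsupported command word

dispatch("MOUNT", print, lambda ccw: print("to I/O manager:", ccw))
```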
  • the following discussion provides write and read operations that occur during typical interaction between Mainframe A and the queue server 300.
  • the Tape Volume Id is specified in the JCL (Job Control Language) that runs the job in question.
  • Mainframe A initiates the tape operation by sending an LDD CCW (Load Display Device Channel Command Word), which identifies the specific tape to be mounted and the "device" on which to mount it.
  • the "device” is a tape drive, which is being emulated by the tape drive emulator 320.
  • This CCW (i.e., the 'command' sent on the channel 312) is received by the channel-to-channel adapter card 310 and intercepted by the tape drive emulator 320.
  • the tape drive emulator 320 then sends notice to the group driver 405 via an interdriver control message (MOUNT_REQUEST_RECEIVED), which contains the information sent via the channel 312.
  • the group driver 405 receives the message, determines its ultimate destination (i.e., the individual application or thread controlling the specific tape drive emulator 320 and queue 415), and places the message into a control message for delivery to the I/O manager 400, where the I/O manager 400 is the major component of the protocol transfer manager 325, also referred to as a SMART (system for message addressing routing and translation).
  • the I/O manager 400 uses the Tape VolumeId contained in the message to 'look up' the queue associated with the Tape VolumeId.
  • the I/O manager 400 uses the Virtual Tape Library (VTL - an internal process within the queue server 300) to perform this lookup function.
  • VTL uses a local database, described in reference to Fig. 5, to provide a mapping between the queuing engine's (i.e., I/O manager 400 and group driver 405) data message queues 415 (not to be confused with internal interdriver queues, not shown, between the tape drive emulators 320 and group drivers 405) and the tape VolumeIds requested by the mainframe job.
  • if the request is for an arbitrary ID, the VTL assigns an ID from its pool of preassigned IDs; if the request is for a specific ID, the specific ID is used. Regardless of the source, the ID is associated with a message queue (e.g., queue 415a). If the requested message queue 415a exists (i.e., the I/O manager 400 is reusing an existing queue), the requested message queue 415a is cleared of existing messages; otherwise, a new queue is created. The queue returned is associated (sometimes referred to as 'partnered') with the tape drive emulator 320 servicing the request.
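The VTL lookup rules just described can be sketched as follows; the ID pool and queue store are illustrative assumptions.

```python
# Illustrative rendering of the VTL lookup: arbitrary IDs come from a
# preassigned pool, specific IDs are honored, existing queues are cleared
# for reuse, and missing queues are created.
class VirtualTapeLibrary:
    def __init__(self, id_pool):
        self.id_pool = list(id_pool)     # pool of preassigned volume IDs
        self.queues = {}                 # volume ID -> message queue (a list)

    def lookup(self, requested_id=None):
        vol_id = requested_id or self.id_pool.pop(0)   # specific, or from pool
        if vol_id in self.queues:
            self.queues[vol_id].clear()  # reuse existing queue: drop old messages
        else:
            self.queues[vol_id] = []     # otherwise create a new queue
        return vol_id, self.queues[vol_id]

vtl = VirtualTapeLibrary(["VOL001", "VOL002"])
print(vtl.lookup())              # ('VOL001', []) - arbitrary ID from the pool
print(vtl.lookup("PAYROLL"))     # ('PAYROLL', []) - specific ID honored
```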
  • the I/O manager 400 then notifies the group driver 405 to 'release' the mainframe/channel, which has been 'waiting' patiently for the channel/tape drive emulator to return 'OK' to its mount request.
  • the group driver 405 formats and sends an interdriver 'release' message to the tape drive emulator 320, which issues the necessary channel commands to release the channel 312, Mainframe A, and itself for further activity.
  • Mainframe A most likely next sends a tape label (three short data records containing information about the data to be written) via the channel 312 to the tape drive emulator 320.
  • This tape label information is intercepted by the tape drive emulator 320, packaged into an interdriver message (TAPE_LABEL_RECEIVED), and sent to the group driver 405.
  • the group driver 405 passes this tape label information to the I/O manager 400.
  • the tape label information is used to 'name' the associated message queue 415.
  • the tape label information is then attached to the message queue 415 in the same way that tape label information is attached to a real (i.e., physical) tape volume.
  • the information in the tape label remains with the message queue 415a and is 'played back' to Mainframe A when the message queue 415a is read.
  • the I/O manager 400 notifies ('releases') Mainframe A by passing a message to the group driver 405, which sends the message to the tape drive emulator 320, which notifies the channel 312, etc. Following the release, Mainframe A begins sending data messages as if it were sending the data messages to a real tape drive. These messages are placed, under software control, directly into the main shared memory buffer pools 330 (Fig. 3) (via hardware-driven DMA - Direct Memory Access, controlled by dedicated hardware, such as IBM® EET® chips residing in the channel-to-channel adapter card 310), which are visible to the queue server 300 components.
  • data messages are not copied; only pointers to the internal shared buffers are moved as interdriver messages between the tape drive emulator 320 and group driver 405.
  • Pointers to data messages are passed as interdriver messages from the tape drive emulator 320 to the group driver 405 and are queued to the correct I/O manager 400.
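A toy rendering of this zero-copy handoff, with the shared buffer pool and interdriver queue as plain Python structures standing in for the DMA hardware:

```python
# DMA fills a shared buffer once; only a buffer reference travels through the
# interdriver queues. All structures here are illustrative stand-ins.
shared_buffers = {}          # main shared memory buffer pool (filled by DMA)
interdriver_queue = []       # emulator -> group driver -> I/O manager pointers

def dma_write(buffer_id, payload):
    shared_buffers[buffer_id] = payload      # data is placed in memory once

def pass_pointer(buffer_id):
    interdriver_queue.append(buffer_id)      # only the pointer moves, not data

def io_manager_drain(message_queue):
    while interdriver_queue:
        buffer_id = interdriver_queue.pop(0)
        message_queue.append(shared_buffers.pop(buffer_id))  # safe-store message

queue_415a = []
dma_write(0x1000, b"data record 1")
pass_pointer(0x1000)
io_manager_drain(queue_415a)     # the message is now safe-stored in the queue
print(queue_415a)                # [b'data record 1']
```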
  • the I/O manager 400 reads the interdriver message queue (not shown), references the data message buffer (not shown), and moves the message to the associated message queue 415a. After the queue signals to the I/O manager 400 that the message is properly safe-stored, the I/O manager 400 notifies the tape drive emulator 320, via a message to the group driver 405, to release the channel 312 to Mainframe A. This sequence continues until Mainframe A sends a TAPEMARK (a special CCW marking the end of the data).
  • the tape drive emulator 320 intercepts this CCW and passes it to the I/O manager 400 as a control message via the group driver 405. After the I/O manager 400 receives the TAPEMARK, it closes the message queue 415 and disassociates it from the tape drive emulator 320. Mainframe A next sends a trailing label followed by REWIND (and/or UNLOAD).
  • the I/O manager is notified of the command and completes the disassociation of the tape drive emulator 320 and queue 415a.
  • the I/O Manager 400 then recycles the tape drive emulator 320 for another mainframe request.
  • Mainframe READ operation: READ operations differ very little from WRITE operations.
  • the channel/mainframe first sends a request to mount a specific tape volume (e.g., volume 410a).
  • the volume 410a and its associated queue 415a must exist. Lookup is performed by the VTL.
  • once the I/O manager 400 associates the tape drive emulator 320 with the requested queue 415a, it passes the information from the stored label to the tape drive emulator 320, which presents it to the channel 312 in response to a READ CCW. (This simulates a real tape device presenting the real tape label from the tape.)
  • once Mainframe A has 'read' and verified the label, it sends a series of READ CCWs. These are passed to the I/O manager 400 as control messages. Each read results in the I/O manager 400 presenting the 'next' data message from the queue 415a to the tape drive emulator 320 for delivery to the channel 312.
  • when the last message is read from the queue 415a, the I/O manager 400 notifies the tape drive emulator 320, via a WRITE_TAPEMARK control command, and the tape drive emulator 320 simulates a TAPEMARK status to the channel 312. Mainframe A then initiates 'close' processing during which the I/O manager 400 disassociates the queue 415a and tape drive emulator 320.
  • Mainframe A then sends a REWIND or UNLOAD command via the channel 312. This is passed to the I/O manager 400, which completes the tape drive emulator 320 and queue 415a disassociation.
  • the tape drive emulator 320 enters an idle state and is available to be associated with another queue (e.g., queue 415b).
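Condensing the READ sequence above into a hypothetical sketch (the function names are illustrative, not from the patent):

```python
# Mount and label playback, one queue message per READ CCW, then a simulated
# TAPEMARK and disassociation.
def read_volume(volumes, volume_id, channel_write):
    entry = volumes[volume_id]           # the volume and its queue must exist
    channel_write(entry["label"])        # play back the stored tape label first
    for message in entry["messages"]:    # each READ CCW yields the next message
        channel_write(message)
    channel_write("TAPEMARK")            # last message read: simulate TAPEMARK
    # 'close' processing would now disassociate the emulator and the queue.

volumes = {"VOL001": {"label": "HDR1 FLIGHT.DATA", "messages": [b"m1", b"m2"]}}
read_volume(volumes, "VOL001", print)    # prints label, m1, m2, TAPEMARK
```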
  • Fig. 5 is a block diagram of the I/O manager 400 and its associated device table database 500.
  • the device table database 500 is used to initialize various components in the queue server 300.
  • the device table database 500 includes a device name field, operation mode field, default channel configuration, queue name, file pointer name (pName), etc. These fields are (i) representative of the types of actions executed by a real tape drive and (ii) associated with actions requested of a real tape drive.
  • the state of the fields in the device table database 500 configures the tape drive emulator 320 for interfacing with the commands/requests from the legacy applications in Mainframe A.
  • Timing specifications, block size, date, time, labeled/not labeled, channel status, and other relevant information specific to the mainframes, mainframe operating system, or legacy applications are stored so as to respond to signals from the adapter cards 310 in a manner expected by the channels 312 and mainframes 100.
  • the device table database 500 may also include information for configuring the adapter cards 310. Further, the device table database may include information for interfacing with the networking middleware 335 and/or TCP/IP card 338.
  • the device table database 500 is typically accessed during initialization of the queue server 300.
  • the device table database 500 may specify the number of tape drive emulators 320 that are used in the queue server 300 to support the adapter cards 310, the number of group drivers supporting the I/O manager 400 in communicating with the tape drive emulators 320, and the number of I/O managers 400 used by the queue server 300.
  • the device table database 500 may also specify the locations of the volumes 410 within the memory 330 and queues 415 within the volumes 410 (Fig. 4). It should be understood that the device table database 500 can be expanded and upgraded, as necessary.
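An illustrative shape for one device table entry, using the fields named above (device name, operation mode, default channel configuration, queue name, pointer name); the exact schema is an assumption:

```python
from dataclasses import dataclass

@dataclass
class DeviceTableEntry:
    device_name: str
    operation_mode: str          # e.g. "read", "write", "read/write"
    default_channel: str         # default channel configuration
    queue_name: str
    p_name: str                  # file pointer name (pName)
    block_size: int
    labeled: bool

def initialize(entries):
    # At queue server start-up, each entry configures one tape drive emulator.
    return {e.device_name: e for e in entries}

emulators = initialize([
    DeviceTableEntry("TAPE0", "read/write", "CHAN-A", "VOL001.FLIGHTS",
                     "pFlights", 32760, True),
])
```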
  • Fig. 6 is a block diagram of a closed network 600 in which the queue server 300 is used to provide protocol conversion among four mainframes. As shown, Mainframes A-D have channels coupling them to the queue server 300.
  • a queue 410 has been set up to store messages from Mainframe D. Following Mainframe D message storage, Mainframe A requests the messages in the queue 410.
  • Mainframe A may have requested data that the I/O manager 400 knows to be stored on a Mainframe D tape.
  • the I/O manager may cause a message to be displayed to a technician to have the data loaded by Mainframe D and stored to a message queue 410 for retrieval by Mainframe A.
  • Mainframe D writes data to the queue 410 in a manner typical of writing to a tape drive.
  • Mainframe A reads the messages in the queue 410 in a manner typical of reading from a tape in a tape drive.
  • the I/O manager 400 (Fig. 4) and group drivers 405 (Fig. 4) support the tape drive emulators 320 during the read and write processes.
  • protocol A operating in Mainframe A receives data from protocol D in Mainframe D without having to rewrite legacy applications in either mainframe.
  • This protocol conversion is made possible by the mainframes' common ability to interface with a tape drive, a role filled here by the queue server 300's emulation of a tape drive. Note that if the length of the queue 410 is reduced to a length of one message, then the protocol conversion from protocol D to protocol A is near real-time.
  • Fig. 7 is a block diagram of an exemplary open network 700 having several queue servers 300 supporting mainframes in various cities about the United States.
  • the application here is an airline, Airline A, that wishes to make its mainframe data available to other mainframes around the country for various offices of airline representatives, agents, and consumers having connections to the open network 700.
  • Airline A has two mainframes 100a, 100b, connected to a queue server 300.
  • the mainframes 100a, 100b can share each other's data through the use of the associated queue server.
  • the mainframes 100a, 100b can share data with other mainframes via the queue server 300 and networking middleware 335 (Fig. 3).
  • the queue server 300 is connected to a wide area network 350.
  • the wide area network 350 is connected to another wide area network 350 (e.g., the Internet) and another queue server 300, which is located in New York.
  • the queue server 300 located in New York supports an associated mainframe 100e, which is owned by Airline B.
  • Airline B may, for instance, be a subsidiary of Airline A or a business partner, such as an independent, international, airline affiliate.
  • Personnel associated with Airline B may wish to access data from Airline A, such as passenger route information, transaction reports, etc.
  • Airline A also has a mainframe 100c in Chicago having an associated queue server 300 that provides connections to the wide area network 350, which provides connection to the queue server 300 in New York and distal connection to the queue server 300 in Boston.
  • personnel in Chicago connected to the Chicago mainframe 100c have access to data in Boston and New York.
  • the personnel in Chicago have access to data stored on tapes or in the mainframes located in Denver, mainframe 100d, and Los Angeles, mainframe 100f.
  • the queue servers 300 provide protocol-to-protocol conversion between the protocols of operating systems running the mainframes 100a, 100b and network protocols, such as the TCP/IP protocols.
  • Commercial subsystems are used where appropriate (e.g., commercial messaging middleware 335 and TCP/IP interface card 338) within the queue servers 300 so as to have the queue servers 300 be compatible with the latest and/or legacy open systems architectures.

Abstract

A message queue server emulates a computer peripheral that not only supports communication between two mainframes, but also provides a gateway to open systems computers, networks, and other similar message queue servers. The message queue server provides protocol-to-protocol conversion from mainframes to today's computing systems in a manner that does not require businesses that own the mainframes to rewrite legacy applications to share data with other mainframes and open systems. The message queue server emulates a mainframe peripheral coupled to a first mainframe having a first protocol. The system includes digital storage to temporarily store information from the first mainframe. The system includes at least one manager that (i) coordinates the transfer of the information of the first protocol between the mainframe peripheral emulator and the digital storage and (ii) coordinates transfer of the information between the digital storage and (a) a second mainframe having a second protocol or (b) a computer network having a third protocol. Preferably, the message queue server emulates a tape drive and arranges the stored messages in a queue. Optionally, the message queue server manages the message queues as a function of information usually found in a standard tape label.

Description

MESSAGE QUEUE SERVER SYSTEM
BACKGROUND OF THE INVENTION
Today's computing networks, such as the Internet, have become so widely used, in part, because of the ability for the various computers connected to the networks to share data. These networks and computers are often referred to as "open systems" and are capable of sharing data due to commonality among the data handling protocols supported by the networks and computers. For example, a server at one end of the Internet can provide airline flight data to a personal computer in a consumer's home. The consumer can then make flight arrangements, including paying for the flight reservation, without ever having to speak with an airline agent or having to travel to a ticket office. This is but one scenario in which open systems are used.
One type of computer system that has not "kept up with the times" is the mainframe computer. A mainframe computer was at one time considered a very sophisticated computer, capable of handling many more processes and transactions than the personal computer. Today, however, because the mainframe computer is not an open system, its processing abilities are somewhat reduced in value since legacy data that are stored on tapes and read by the mainframes via tape drives are unable to be used by open systems. In the airline scenario discussed above, for example, the airline is unable to make the mainframe data available to consumers.
FIG. 1 illustrates a present day environment of the mainframe computer. The airline, Airline A, has two mainframes, a first mainframe 100a (Mainframe A) and a second mainframe 100b (Mainframe B). The mainframes may be in the same room or may be separated by a building, city, state or continent.
The mainframes 100a and 100b have respective tape drives 105a and 105b to access and store data on data tapes 115a and 115b corresponding to the tasks with which the mainframes are charged. Respective local tape storage bins 110a and 110b store the data tapes 115a, 115b. During the course of a day, a technician 120a servicing Mainframe A loads and unloads the data tapes 115a. Though shown as a single tape storage bin 110a, the tape storage bin 110a may actually be an entire warehouse full of data tapes 115a. Thus, each time a new tape is requested by a user of Mainframe A, the technician 120a retrieves a data tape 115a and inserts it into tape drive 105a of Mainframe A.
Similarly, a technician 120b services Mainframe B with its respective data tapes 115b. In the event an operator of Mainframe A desires data from a Mainframe B data tape 115b, the second technician 120b must retrieve the tape and send it to the first technician 120a, who inserts it into the Mainframe A tape drive 105a. If the mainframes are separated by a large distance, the data tape 115b must be shipped across this distance and is then temporarily unavailable to Mainframe B.
FIG. 2 is an illustration of a prior art channel-to-channel adapter 205 used to solve the problem of data sharing between Mainframes A and B that reside in the same location. The channel-to-channel adapter 205 is in communication with both Mainframes A and B. In this scenario, it is assumed that Mainframe A uses an operating system having a first protocol, protocol A, and Mainframe B uses an operating system having a second protocol, protocol B. It is further assumed that the channel-to-channel adapter 205 uses a third operating system having a third protocol, protocol C. The adapter 205 negotiates communications between Mainframes A and B. Once the negotiation is completed, Mainframes A and B are able to transmit and receive data with one another according to the rules negotiated.
In this scenario, all legacy applications operating on Mainframes A and B have to be rewritten to communicate with the protocol of the channel-to-channel adapter 205. The legacy applications may be written in relatively archaic programming languages, such as COBOL. Because many of the legacy applications are written in older programming languages, the legacy applications are difficult enough to maintain, let alone upgrade, to use the channel-to-channel adapter 205 to share data between the mainframes.
Another type of adapter used to share data among mainframes or other computers in heterogeneous computing environments is described in U.S. Patent No. 6,141,701, issued October 31, 2000, entitled "System for, and Method of, Off- Loading Network Transactions from a Mainframe to an Intelligent Input/Output Device, Including Message Queuing Facilities," by Whitney. The adapter described by Whitney is a message oriented middleware system that facilitates the exchange of information between computing systems with different processing characteristics, such as different operating systems, processing architectures, data storage formats, file subsystems, communication stacks, and the like. Of particular relevance is the family of products known as "message queuing facilities" (MQF). Message queuing facilities help applications in one computing system communicate with applications in another computing system by using queues to insulate or abstract each other's differences. The sending application "connects" to a queue manager (a component of the MQF) and "opens" the local queue using the queue manager's queue definition (both the "connect" and "open" are executable "verbs" in a message queue series (MQSeries) application programming interface [API]). The application can then "put" the message on the queue. Before sending a message, an MQF typically commits the message to persistent storage, typically to a direct access storage device (DASD). Once the message is committed to persistent storage, the MQF sends the message via the communications stack to the recipient's complementary and remote MQF. The remote MQF commits the message to persistent storage and sends an acknowledgment to the sending MQF. The acknowledgment back to the sending queue manager permits it to delete the message from the sender's persistent storage. The message stays on the remote MQF's persistent storage until the receiving application indicates it has completed its processing of it. The queue definition indicates whether the remote MQF must trigger the receiving application or if the receiver will poll the queue on its own. The use of persistent storage facilitates recoverability. This is known as "persistent queue."
Eventually, the receiving application is informed of the message in its local queue (i.e., the remote queue with respect to the sending application), and it, like the sending application, "connects" to its local queue manager and "opens" the queue on which the message resides. The receiving application can then execute "get" or "browse" verbs to either read the message from the queue or just look at it.
When either application is done processing its queue, it is free to issue the "close" verb and "disconnect" from the queue manager. The persistent queue storage used by the MQF is logically an indexed sequential data set file. The messages are typically placed in the queue on a first-in, first-out (FIFO) basis, but the queue model also allows indexed access for browsing and the direct access of the messages in the queue.
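By way of illustration, the following minimal Python sketch models the MQF verb sequence just described (open, put with commit-and-acknowledge, get, browse). The class and function names only echo the MQSeries verbs; they are not the actual MQSeries API.

```python
# Illustrative model of the MQF flow described above.
class QueueManager:
    def __init__(self, name):
        self.name = name
        self.storage = {}                     # queue name -> messages (stands in for DASD)

    def open(self, queue_name):               # "connect"/"open" collapsed for brevity
        return self.storage.setdefault(queue_name, [])

def put(sender, remote, queue_name, message):
    local = sender.open(queue_name)
    local.append(message)                     # commit to the sender's persistent storage
    remote.open(queue_name).append(message)   # remote MQF commits the message...
    acknowledged = True                       # ...and returns an acknowledgment
    if acknowledged:
        local.remove(message)                 # the ack lets the sender delete its copy

def get(qm, queue_name):
    q = qm.open(queue_name)
    return q.pop(0) if q else None            # read and remove (FIFO)

def browse(qm, queue_name, index=0):
    q = qm.open(queue_name)
    return q[index] if index < len(q) else None   # look without removing

qm_a, qm_b = QueueManager("QMA"), QueueManager("QMB")
put(qm_a, qm_b, "FLIGHT.DATA", b"fare update")
print(browse(qm_b, "FLIGHT.DATA"))            # b'fare update' (message stays queued)
print(get(qm_b, "FLIGHT.DATA"))               # b'fare update' (message removed)
```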
Though MQF is helpful for many applications, current MQF and related software utilize considerable mainframe resources. Moreover, modern MQFs have limited, if any, functionality allowing shared queues to be supported.
Another type of adapter used to share data among mainframes or other computers in heterogeneous computing environments is described in U.S. Patent No. 5,906,658, issued May 25, 1999, entitled "Message Queuing on a Data Storage System Utilizing Message Queueing in Intended Recipient's Queue," by Raz. Raz provides, in one aspect, a method for transferring messages between a plurality of processes that are communicating with a data storage system, wherein the plurality of processes access the data storage system by using I/O services. The data storage system is configured to provide a shared data storage area for the plurality of processes, wherein each of the plurality of processes is permitted to access the shared data storage region.
SUMMARY OF THE INVENTION
In U.S. Patent No. 6,141,701, Whitney addresses the problem that current MQF (message queuing facilities) and related software utilize considerable mainframe resources and costs associated therewith. By moving the MQF and related processing from the mainframe processor to an I/O adapter device, the I/O adapter device performs a conventional I/O function, but also includes MQF software, a communications stack, and other logic. The MQF software and the communications stack on the I/O adapter device are conventional.
Whitney further provides logic effectively serving as an interface to the MQF software. In particular, the I/O adapter device of Whitney includes a storage controller that has a processor and a memory. The controller receives I/O commands having corresponding addresses. The logic is responsive to the I/O commands and determines whether an I/O command is within a first set of predetermined I/O commands. If so, the logic maps the I/O command to a corresponding message queue verb and queue to invoke the MQF. From this, the MQF may cooperate with the communications stack to send and receive information corresponding to the verb.
The problem with the solution offered by Whitney is similar to that of the adapter 205 (FIG. 2) in that the legacy applications of the mainframe must be rewritten to use the protocol of the MQF. This forces a company, such as an airline, that is not in the business of maintaining and upgrading legacy software to expend resources upgrading its mainframes to work with the MQF, both to communicate with today's open computer systems and to share data even among its own mainframes; it also does not address the problems encountered when mainframes are located in different cities. The problem with the solution offered in U.S. Patent No. 5,906,658 by Raz is that, as in the case of Whitney, legacy applications on mainframes must be rewritten in order to allow the plurality of processes to share data.
The present invention addresses the issue of having to rewrite legacy applications in mainframes by using the premise that mainframes have certain peripheral devices. For example, mainframes have tape drives, and, consequently, the legacy applications operating on the mainframes have the ability to read and write from tape drives. Therefore, the present invention addresses the problems and shortcomings of the prior art systems by providing a message queue server that emulates a tape drive that not only supports communication between two mainframes, but also provides a gateway to open systems computers, networks, and other similar message queue servers. In short, the principles of the present invention provide protocol-to-protocol conversion from mainframes to today's computing systems in a manner that does not require businesses that own the mainframes to rewrite legacy applications to share data with other mainframes and open systems. One aspect of the present invention is a system for protocol conversion. The system includes a device emulator coupled to a first device, such as a mainframe computer, having a first protocol. The system includes digital storage to temporarily store information from the first protocol. The system also includes at least one manager that (i) coordinates the transfer of the information of the first protocol between the device emulator and the digital storage and (ii) coordinates transfer of the information between the digital storage and a device having a second protocol.
Preferably, the device emulator is a tape drive emulator. Typically, the information is arranged in a queue in the digital storage. Another aspect of the present invention includes a manager for protocol conversion. The system includes at least one I/O manager having intelligence to support states of emulation devices transceiving messages using a first protocol and an interface transceiving messages using a second protocol. The system includes at least one emulation device providing low-level control reaction to an external device adhering to the first protocol. At least one group driver is included to provide an interface between the I/O manager and the emulation device(s). In one embodiment, the emulation device emulates a tape drive.
Yet another aspect of the present invention is a system for mainframe-to-mainframe connectivity. The system includes a first device emulator that emulates a computer peripheral in communication with a first mainframe. The first device emulator acts as a standard sequential storage device. The system also includes a second device emulator in communication with a second mainframe. The second device emulator also acts as a standard sequential storage device. Digital storage is coupled to the first and second device emulators to store information temporarily for the first and second device emulators. The system also includes at least one manager that (i) coordinates a first transfer of information between the first device emulator and the digital storage and (ii) coordinates a second transfer of information from the digital storage to the second device emulator. The first and second mainframes have access to the information via respective device emulators. In one embodiment, the information stored in the digital storage is arranged in a queue.
Yet another aspect of the present invention includes a method and apparatus for managing messages in a data storage system. The data storage system receives information that is normally contained in a standard tape label. Based on the information, a controller applies the information to a non-tape memory designated for a message queue. The controller stores messages related to the information in the memory. The controller also manages the message queue as a function of the standard tape label information. Examples of standard tape label information that is acted on by the controller include: volume serial number, data set name, expiration date, security attributes, and data characteristics. The various aspects of the present invention can be used in a network environment. For example, to share data, mainframes connected to the emulators (e.g., tape drive emulators) can be located in a closed network containing the two mainframes and the emulator-based protocol-to-protocol conversion system, where messages are transferred from one mainframe to the other by placing them in the memory supporting the emulators en route to the other mainframe. In a larger networking environment, the mainframes need not be in a closed network. In such an environment, the system includes a device emulator connecting to the mainframes, a processor for executing software servicing message queues, memory for storing the message queues, and a network interface card, such as a TCP/IP interface card, connecting to a TCP/IP network to transfer the messages in a packetized manner from the first mainframe to at least one other mainframe. In other words, once the messages are in the memory supporting the device emulators, they can be transferred to other memories supporting other device emulators via any middleware interface, commercial or customized, for delivery to the other mainframe(s). Alternatively, the messages can be transferred to any open system computer or computer network.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
FIG. 1 is an illustration of an environment in which mainframe computers are used with computer tapes to share data among the mainframe computers;
FIG. 2 is a block diagram of a prior art solution to sharing data between mainframes without having to physically transport tapes between the mainframes, as in the environment of FIG. 1;
FIG. 3 is a block diagram in which a mainframe is able to share data with an open system computer network via a queue server according to the principles of the present invention;
FIG. 4 is a detailed block diagram of the queue server of FIG. 3;
FIG. 5 is a block diagram of an I/O manager, employed by the queue server of FIG. 4, having a device table database;
FIG. 6 is an illustration of an environment in which the queue server of FIG. 3 is used by mainframes to share data; and
FIG. 7 is a block diagram of an environment in which mainframes are able to share data with other mainframes via the queue server of FIG. 3 over long distances through the use of wide area networks.
DETAILED DESCRIPTION OF THE INVENTION
A description of preferred embodiments of the invention follows. Fig. 3 is a block diagram of a mainframe 100a (Mainframe A) in communication with a queue server 300 employing the principles of the present invention. The queue server 300 is in communication with a computer network 350 (e.g., the Internet) and an open system computer 345.
Mainframe A has an operating system and legacy applications, such as applications written in COBOL. The operating system and legacy applications are not inherently capable of communicating with today's open systems computer networks and computers. Mainframe A, however, does have data useful to open systems and other mainframes (not shown), so the queue server 300 acts as a transfer agent between Mainframe A and the computers connected to open systems computer networks.
To transfer data between Mainframe A and the queue server 300, Mainframe A provides data to a channel 312. The channel 312 includes three components: a communication link 305 and two interface cards 310, one located at Mainframe A and the other at the queue server 300. The interface card 310 located in the queue server may support block message transfers and non-volatile memory, as described in U.S. Provisional Patent Application No. 60/209,054, filed June 2, 2000, entitled "Enhanced EET-3 Channel Adapter Card," by Haulund et al., and co-pending U.S. Patent Application, filed concurrently herewith, entitled "," by Haulund et al., the entire teachings of both of which are incorporated herein by reference. Mainframe A also receives information from the queue server 300 over the same channel 312. The channel 312 is basically transparent to Mainframe A and the queue server 300.

Mainframes, such as Mainframe A, have traditional device peripherals that support the mainframes. For example, mainframes are capable of communicating with printers and tape drives. That means that the applications running on the operating system on Mainframe A have the "hooks" for communicating with a printer and a tape drive. The queue server 300 takes advantage of this commonality among mainframes by providing an interface to the legacy applications with which they are already familiar. Here, the queue server 300 has a device emulator 315 that serves as a transceiver with the legacy applications via the channels 312. Thus, rather than "reinventing the wheel" by providing an MQF (message queue facility) that is a stand-alone device and requires legacy applications to be rewritten to communicate with it, the queue server 300 emulates a peripheral known to mainframes.
In one embodiment, the device emulator 315 is composed of multiple tape drive emulators 320. In actuality, the tape drive emulators 320 are merely software instances that interact with the interface cards 310. The tape drive emulators 320 provide low-level control reactions that adhere to the stringent timing requirements of traditional commercial tape drives that mainframes use to read and write data. In this way, the legacy applications are under the impression that they are simply reading and writing data from and to a tape drive, unaware that the data is being transferred to computers using other protocols.
In practice, the data received by the tape drive emulators 320 are provided to memory 330, as supported by a protocol transfer manager 325. Once in memory 330, the data provided by the legacy applications are then capable of being transferred to commercial messaging middleware 335. The commercial messaging middleware 335 is also supported by the protocol transfer manager 325, which supports read/write transactions of the commercial messaging middleware 335 with the memory 330 and higher-level administrative activities.
The commercial messaging middleware 335 interfaces with an interface card 338, such as a TCP/IP interface card, that connects to a modern computer network, such as the Internet 350, via any type of network line 340. For example, the network line 340 could be a fiber optic cable, local area network cable, wireless interface, etc. Further, a desktop computer 345 could be directly coupled to the commercial messaging middleware 335 via the network line 340 and the TCP/IP interface card 338. In effect, the queue server 300 solves the problem of getting data from mainframes into a standard, commercial environment that is easily accessed by today's commercial programs. The commercial programs may then use the data from the mainframes to publish, filter, or transform the data for later use by, for example, airline representatives, agents, or consumers who wish to access the data for flight planning or other reasons.

The queue server 300 may also act as an interface between the various operating systems of mainframes. For example, a TPF (transaction processing facility) mainframe operating system used for reservations and payment transactions can transfer the TPF data to a mainframe using a VM (virtual machine) mainframe operating system or to a mainframe using the MVS (multiple virtual storage) mainframe operating system. The queue server 300 allows data flow between the various mainframes by temporarily storing data in messages in persistent message queues in the memory 330. In other words, the memory 330 is not intended as a permanent storage location, as in the case of a physical reel tape, but will retain the messages containing the data until instructed to discard them. The messages stored in the memory 330 are typically arranged in a queue in the same manner as messages are stored on a tape because the legacy systems are already programmed to store the data in that manner. Therefore, the legacy applications on the mainframes do not need to be rewritten in any way to transmit and receive data from the memory 330. The queue is logically an indexed sequential data set file, which may also use various queuing models, such as first-in, first-out (FIFO); last-in, first-out (LIFO); or priority queuing models. It should be understood that the memory 330 is very large (e.g., terabytes) to accommodate all the data that is usually stored on large computer tapes.
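A minimal Python sketch of such a queue follows; the class name, model strings, and list/deque backing are assumptions made for illustration, since the patent does not prescribe any particular implementation of the queuing models.

import heapq
from collections import deque

class MessageQueue:
    """Message queue supporting FIFO, LIFO, or priority ordering."""
    def __init__(self, model="fifo"):
        self.model = model
        self.items = [] if model == "priority" else deque()

    def put(self, msg, priority=0):
        if self.model == "priority":
            heapq.heappush(self.items, (priority, msg))
        else:
            self.items.append(msg)

    def get(self):
        if self.model == "priority":
            return heapq.heappop(self.items)[1]   # lowest priority value first
        if self.model == "lifo":
            return self.items.pop()               # newest first
        return self.items.popleft()               # FIFO: oldest first, like a tape

q = MessageQueue("fifo")
q.put(b"record A")
q.put(b"record B")
assert q.get() == b"record A"                     # sequential, tape-like order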
The data exchange between the mainframes can be done in near-real-time or non-real-time, depending on the length of the queue. For example, if the queue storing the messages has a length of one message, then the data exchange is near-real-time since the message is forwarded to the receiving mainframe once the queue is full with the one message. If the length of the queue is several hundred messages, then data from a first mainframe is written until the queue is filled, and then the data is transferred to the second mainframe in a typical tape drive-to-mainframe manner. The channel 312 typically transfers messages on a message-by-message basis. The memory 330, however, allows storage of many messages at a time, which allows the protocol transfer manager 325 to configure the tape drive emulators 320 in a mode supporting direct memory access (DMA) transfer of messages to improve data flow in the emulator-to-memory link of the data flow.
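The queue-length trade-off can be made concrete with a short, hypothetical sketch; the ForwardingQueue name and the deliver callback are inventions of this description, not of the patent.

class ForwardingQueue:
    """Forwards accumulated messages once `length` messages are queued.
    length=1 approximates near-real-time exchange; a large length batches
    the transfer in a typical tape drive-to-mainframe manner."""
    def __init__(self, length, deliver):
        self.length = length
        self.deliver = deliver              # callback toward the receiving mainframe
        self.pending = []

    def put(self, msg):
        self.pending.append(msg)
        if len(self.pending) >= self.length:      # queue "full": transfer now
            self.deliver(list(self.pending))
            self.pending.clear()

near_real_time = ForwardingQueue(1, deliver=lambda batch: print("forwarded:", batch))
near_real_time.put(b"fare update")                # delivered immediately
batched = ForwardingQueue(300, deliver=lambda batch: print(len(batch), "records"))
batched.put(b"record 1")                          # held until the queue fills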
Fig. 4 is a detailed block diagram of the queue server 300. The queue server 300 has (i) a front-end that includes adapter cards 310 and tape drive emulators 320, (ii) a protocol transfer manager 325 that includes software processes, and (iii) a back-end that includes networking middleware 335 and a network interface card 338, where the networking middleware 335 is connected to a network line 340 via the network interface card 338.
Referring first to the front-end of the queue server 300, the adapter cards 310 and tape drive emulators 320 compose a device emulator 315. As shown, a single tape drive emulator 320 is coupled to and supports a single adapter card 310. However, because the tape drive emulator 320 is embodied as one or more software instances, there can be many tape drive emulators connecting to a single adapter card 310, and vice versa. The tape drive emulator 320 and I/O manager 400 support the standard channel command words provided by legacy applications operating on a mainframe, such as Mainframe A. For example, the channel command words include read, write, mount, dismount, and other tape drive commands that are normally used to control a tape drive. In an alternative embodiment, the tape drive emulators 320 emulate a different mainframe peripheral device; in that case, the tape drive emulators 320 support a different, respective set of command words provided by the legacy applications for communicating with that peripheral device.

Referring next to the protocol transfer manager 325 of the queue server 300, located between the I/O manager 400 and the tape drive emulators 320 is at least one group driver 405. The group drivers 405 are also software instances, as in the case of the tape drive emulators 320. The group drivers 405 are intended to off-load some of the processing required by the I/O manager 400 so that the I/O manager does not have to interface directly with each of the tape drive emulators 320. Each group driver 405 provides interface support between one or more associated tape drive emulator(s) 320 and the I/O manager 400. The group drivers 405 multiplex signals from the tape drive emulators 320 with which they are associated. Because the group drivers 405 are software instances, any number of group drivers 405 can be provided to support the tape drive emulators 320. Similarly, because the I/O manager 400 is a software instance, there can be many I/O managers 400 operating in the queue server 300. Thus, the protocol transfer manager 325 can be configured to provide parallel processing functionality for the mainframes and open systems being serviced.
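The multiplexing role of the group driver can be sketched as follows. This is an illustrative Python approximation, not the patented implementation, and the names and message format are assumed.

import queue

class GroupDriver:
    """Multiplexes interdriver messages from several tape drive emulators onto
    one path toward the I/O manager, off-loading per-emulator interfacing."""
    def __init__(self, io_manager_inbox):
        self.inbox = queue.Queue()          # shared path from all attached emulators
        self.io_manager_inbox = io_manager_inbox

    def attach(self, emulator_id):
        def send(msg):                      # each emulator gets its own send callback
            self.inbox.put((emulator_id, msg))
        return send

    def run_once(self):
        emulator_id, msg = self.inbox.get() # demultiplex one interdriver message
        self.io_manager_inbox.put({"from": emulator_id, "control": msg})

io_inbox = queue.Queue()
driver = GroupDriver(io_inbox)
send_a = driver.attach("emulator-0")
send_b = driver.attach("emulator-1")
send_a("MOUNT_REQUEST_RECEIVED")
driver.run_once()
print(io_inbox.get())                       # routed control message for the I/O manager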
It should be understood that the queue server 300 is composed of electronics that include computer processors on which the I/O manager 400, group drivers 405, tape drive emulators 320, and commercial messaging middleware 335 are executed. There may be several processors for parallel or distributed processing. The queue server 300 also includes other circuitry to allow the computer processors to interface with the adapter cards 310, memory 330, and TCP/IP interface card 338. The queue server 300 may include additional memory (not shown), such as RAM, ROM, and/or magnetic or optical disks, to store the software listed above. The memory, both for the software and the queues, is preferably local to the queue server 300, but may be remote and accessed over a local area network or wide area network. In the case of the queues, the delay in accessing remote memory will cause additional latency in transferring the messages, but will not affect the interaction with the mainframes, which requires rapid response to requests, since the tape drive emulators 320 handle that function.
Within the memory 330, the messages are stored as queues 415a, 415b, ..., 415n (collectively 415) in a volume 410, as in the case of a standard tape. The queues 415 are managed by using information that is normally contained in a standard tape label. For example, in one embodiment, the volume serial number and data set name are used to build the queue name. Another piece of data that is normally contained in a standard tape label is an expiration date, which allows the I/O manager 400 to decide how long to retain the message queue 415 in the memory 330. Security attributes found in a standard tape label are used by the I/O manager 400 to apply security attributes to the messages in the respective queues 415.
Other information contained in the standard tape label may be used by the I/O manager 400 to optimize the messages in the queue based on the data characteristics of the messages. Mounting the queue, which is done by selecting the pointer (i.e., a software pointer storing the hexadecimal memory location) pointing to the head of the queue, is performed by the I/O manager 400 upon receiving a volume ID or data set name request message from Mainframe A. It should be understood that the management features based on the standard tape label information just described are merely exemplary of the types of actions that can be performed by the I/O manager 400 in managing the queues. Another feature, for example, is a tape mark action that marks an indicator within the associated message queue.
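A hedged sketch of such label-driven queue management follows; the field names, methods, and sample values are assumed for illustration only.

from datetime import date

class LabeledQueue:
    """Message queue managed from standard tape-label fields."""
    def __init__(self, volser, dsname, expires, security=None):
        self.name = f"{volser}.{dsname}"    # queue name from volume serial + data set name
        self.expires = expires              # retention decided from the expiration date
        self.security = security or {}      # security attributes applied to messages
        self.messages = []
        self.head = 0                       # 'mounting' selects this head pointer

    def expired(self, today=None):
        return (today or date.today()) > self.expires

    def mount(self):
        self.head = 0                       # position at the head of the queue
        return self

q = LabeledQueue("VOL001", "FLIGHT.DATA", date(2025, 12, 31), {"read": ["ops"]})
q.mount()
print(q.name, "expired?", q.expired())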
In operation, Mainframe A provides many commands to the queue server 300 for handling messages in queues. These commands are typical of communication with a real tape drive, but here, the tape drive emulators 320 receive the commands and either (i) provide a fast response to Mainframe A in response to those commands or (ii) allow the commands to pass unfettered to the I/O manager 400 for administrative, non-real-time processing. The following discussion describes write and read operations that occur during typical interaction between Mainframe A and the queue server 300.
Mainframe WRITE operation — scratch tape
Assuming the MVS operating system is running on Mainframe A, the Tape Volume Id is specified in the JCL (Job Control Language) that runs the job in question. Mainframe A initiates the tape operation by sending an LDD CCW (Load Display Device Channel Command Word), which identifies the specific tape to be mounted and the "device" on which to mount it. From the point of view of Mainframe A, the "device" is a tape drive, which is being emulated by the tape drive emulator 320. This CCW (i.e., the 'command' sent on the channel 312) is received by the channel-to-channel adapter card 310 and intercepted by the tape drive emulator 320. The tape drive emulator 320 then sends notice to the group driver 405 via an interdriver control message (MOUNT_REQUEST_RECEIVED), which contains the information sent via the channel 312. In one embodiment, there is one message path between the tape drive emulator 320 and the group driver 405 over which messages relating to the adapter cards 310 travel (i.e., the message path is multiplexed). The group driver 405 receives the message, determines its ultimate destination (i.e., the individual application or thread controlling the specific tape drive emulator 320 and queue 415), and places the message into a control message for delivery to the I/O manager 400, where the I/O manager 400 is the major component of the protocol transfer manager 325, also referred to as a SMART (system for message addressing, routing, and translation).
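The emulator's dispatch of CCWs, splitting fast-path responses from control messages passed upward, might be sketched as follows. Aside from LDD and MOUNT_REQUEST_RECEIVED, which the description above names, the command names, status string, and payload fields are placeholders of this description.

FAST_PATH_CCWS = {"READ", "WRITE"}          # placeholder fast-path command names

def on_ccw(ccw, payload, reply_to_channel, notify_group_driver):
    """Emulator dispatch: answer timing-critical CCWs at once; forward
    administrative CCWs upward as interdriver control messages."""
    if ccw == "LDD":                        # Load Display Device: a mount request
        notify_group_driver({"type": "MOUNT_REQUEST_RECEIVED", "payload": payload})
    elif ccw in FAST_PATH_CCWS:
        reply_to_channel("OK")              # immediate low-level control reaction
    else:
        notify_group_driver({"type": "CONTROL", "ccw": ccw, "payload": payload})

on_ccw("LDD", {"volume_id": "VOL001", "device": "emulator-0"},
       reply_to_channel=print, notify_group_driver=print)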
The I/O manager 400 uses the Tape VolumeId contained in the message to 'look up' the queue associated with that Tape VolumeId. The I/O manager 400 uses the Virtual Tape Library (VTL, an internal process within the queue server 300) to perform this lookup function. The VTL uses a local database, described in reference to Fig. 5, to provide a mapping between the queuing engine's (i.e., I/O manager 400 and group driver 405) data message queues 415 (not to be confused with the internal interdriver queues, not shown, between the tape drive emulators 320 and group drivers 405) and the tape VolumeIds requested by the mainframe job. If the request is for a 'scratch' tape ID, the VTL assigns an arbitrary ID from its pool of preassigned IDs; if the request is for a specific ID, the specific ID is used. Regardless of the source, the ID is associated with a message queue (e.g., queue 415a). If the requested message queue 415a exists (i.e., the I/O manager 400 is reusing an existing queue), the requested message queue 415a is cleared of existing messages; otherwise, a new queue is created.
The queue returned is associated (sometimes referred to as 'partnered' or 'married') with the mainframe making the mount request. The I/O manager 400 then notifies the group driver 405 to 'release' the mainframe/channel, which has been 'waiting' patiently for the channel/tape drive emulator to return 'OK' to its mount request. The group driver 405 formats and sends an interdriver 'release' message to the tape drive emulator 320, which issues the necessary channel commands to release the channel 312, Mainframe A, and itself for further activity.
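The VTL's lookup, including the scratch-pool assignment and the clear-or-create behavior just described, can be approximated in a few lines; the class, pool contents, and list-backed queues are illustrative assumptions.

class VirtualTapeLibrary:
    """Maps tape VolumeIds to message queues, with a pool of preassigned
    scratch IDs for 'scratch' mount requests."""
    def __init__(self, scratch_ids):
        self.scratch = list(scratch_ids)
        self.queues = {}                    # VolumeId -> message queue (a list here)

    def lookup(self, volume_id=None):
        if volume_id is None:               # scratch request: assign an arbitrary ID
            volume_id = self.scratch.pop()
        if volume_id in self.queues:
            self.queues[volume_id].clear()  # reuse: clear existing messages
        else:
            self.queues[volume_id] = []     # otherwise create a new queue
        return volume_id, self.queues[volume_id]

vtl = VirtualTapeLibrary(scratch_ids=["SCR001", "SCR002"])
vid, q = vtl.lookup()                       # scratch mount
print("mounted", vid)
vid2, q2 = vtl.lookup("VOL001")             # specific-ID mount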
Mainframe A most likely next sends a tape label (three short data records containing information about the data to be written) via the channel 312 to the tape drive emulator 320. This tape label information is intercepted by the tape drive emulator 320, packaged into an interdriver message (TAPE_LABEL_RECEIVED), and sent to the group driver 405. The group driver 405 passes this tape label information to the I/O manager 400.
The tape label information is used to 'name' the associated message queue 415. The tape label information is then attached to the message queue 415 in the same way that tape label information is attached to a real (i.e., physical) tape volume. The information in the tape label remains with the message queue 415a and is 'played back' to Mainframe A when the message queue 415a is read.
The I/O manager 400 notifies ('releases') Mainframe A by passing a message to the group driver 405, which sends the message to the tape drive emulator 320, which notifies the channel 312, etc. Following the release, Mainframe A begins sending data messages as if it were sending the data messages to a real tape drive. These messages are placed, under software control, directly into the main shared memory buffer pools 330 (Fig. 3) via hardware-driven DMA (direct memory access) controlled by dedicated hardware, such as IBM® EET® chips residing in the channel-to-channel adapter card 310; the buffer pools are visible to the queue server 300 components. Preferably, data messages are not copied; only pointers to the internal shared buffers are moved as interdriver messages between the tape drive emulator 320 and the group driver 405.
Pointers to data messages are passed as interdriver messages from the tape drive emulator 320 to the group driver 405 and are queued to the correct I/O manager 400. The I/O manager 400 reads the interdriver message queue (not shown), references the data message buffer (not shown), and moves the message to the associated message queue 415a. After the queue signals to the I/O manager 400 that the message is properly safe-stored, the I/O manager 400 notifies the tape drive emulator 320, via a message to the group driver 405, to release the channel 312 to Mainframe A.
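The zero-copy handoff can be sketched with integer handles standing in for pointers into the shared buffer pool; the functions and names below are illustrative only, not the hardware DMA mechanism itself.

buffer_pool = {}                            # shared memory pool: handle -> message
next_handle = 0

def dma_receive(data):
    """Stand-in for DMA placing a data message directly into the shared pool;
    only the handle (standing in for a pointer) travels between drivers."""
    global next_handle
    handle = next_handle
    next_handle += 1
    buffer_pool[handle] = data              # message written once, never copied
    return handle

def io_manager_store(handle, message_queue):
    """Move the message by reference into its queue, then release the buffer."""
    message_queue.append(buffer_pool.pop(handle))

queue_415a = []
h = dma_receive(b"passenger record")        # emulator side
io_manager_store(h, queue_415a)             # I/O manager side: only the pointer moved
print(len(queue_415a), "message safe-stored")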
This sequence continues until Mainframe A sends a TAPEMARK (a special CCW). The tape drive emulator 320 intercepts this CCW and passes it to the I/O manager 400 as a control message via the group driver 405. After the I/O manager 400 receives the TAPEMARK, it closes the message queue 415 and disassociates it from the tape drive emulator 320.
Mainframe A next sends a trailing label followed by REWIND (and/or UNLOAD) commands. The I/O manager is notified of the command and completes the disassociation of the tape drive emulator 320 and queue 415a. The I/O manager 400 then recycles the tape drive emulator 320 for another mainframe request.
Mainframe READ operation

READ operations differ very little from WRITE operations. The channel/mainframe first sends a request to mount a specific tape volume (e.g., volume 410a). The volume 410a and its associated queue 415a must already exist. Lookup is performed by the VTL.
Once the I/O manager 400 associates the tape drive emulator 320 with the requested queue 415a, it passes the information from the stored label to the tape drive emulator 320, which presents it to the channel 312 in response to a READ CCW. (This simulates a real tape device presenting the real tape label from the tape.)
Once Mainframe A has 'read' and verified the label, it sends a series of READ CCWs. These are passed to the I/O manager 400 as control messages. Each read results in the I/O manager 400 presenting the 'next' data message from the queue 415a to the tape drive emulator 320 for delivery to the channel 312.
When the last message is read from the queue 415a, the I/O manager 400 notifies the tape drive emulator 320, via a WRITE_TAPEMARK control command, and the tape drive emulator 320 simulates a TAPEMARK status to the channel 312. Mainframe A then initiates 'close' processing, during which the I/O manager 400 disassociates the queue 415a and the tape drive emulator 320.
Mainframe A then sends a REWIND or UNLOAD command via the channel 312. This is passed to the I/O manager 400, which completes the disassociation of the tape drive emulator 320 and the queue 415a.
At that time, the tape drive emulator 320 enters an idle state and is available to be associated with another queue (e.g., queue 415b).
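The read-side lifecycle, from label playback through TAPEMARK to the idle state, might be summarized in a sketch such as the following; the function name, dictionary layout, and message tuples are assumptions of this description.

def read_session(mounted_queue, deliver_to_channel):
    """Read-side lifecycle: play back the stored label, present one record per
    READ, signal TAPEMARK at the end, and return the emulator to idle."""
    deliver_to_channel(("LABEL", mounted_queue["label"]))
    for record in mounted_queue["messages"]:      # one record per READ CCW
        deliver_to_channel(("DATA", record))
    deliver_to_channel(("TAPEMARK", None))        # end of data on the 'tape'
    return "idle"                                  # ready to be associated again

q = {"label": "VOL001.FLIGHT.DATA", "messages": [b"rec1", b"rec2"]}
state = read_session(q, deliver_to_channel=print)
assert state == "idle"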
Fig. 5 is a block diagram of the I/O manager 400 and its associated device table database 500. The device table database 500 is used to initialize various components in the queue server 300. The device table database 500 includes a device name field, an operation mode field, a default channel configuration, a queue name, a file pointer name (pName), etc. These fields are (i) representative of the types of actions executed by a real tape drive and (ii) associated with actions requested of a real tape drive. The states of the fields in the device table database 500 configure the tape drive emulators 320 for interfacing with the commands/requests from the legacy applications in Mainframe A. Timing specifications, block size, date, time, labeled/not labeled, channel status, and other relevant information specific to the mainframes, mainframe operating system, or legacy applications are stored so as to respond to signals from the adapter cards 310 in a manner expected by the channels 312 and mainframes 100. The device table database 500 may also include information for configuring the adapter cards 310. Further, the device table database may include information for interfacing with the networking middleware 335 and/or the TCP/IP card 338.
The device table database 500 is typically accessed during initialization of the queue server 300. For example, the device table database 500 may specify the number of tape drive emulators 320 that are used in the queue server 300 to support the adapter cards 310, the number of group drivers 405 supporting the I/O manager 400 in communicating with the tape drive emulators 320, and the number of I/O managers 400 used by the queue server 300. The device table database 500 may also specify the locations of the volumes 410 within the memory 330 and of the queues 415 within the volumes 410 (Fig. 4). It should be understood that the device table database 500 can be expanded and upgraded as necessary.
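A hypothetical fragment of such a device table and its use during initialization might look like the following; the field names echo those listed above, but the values and layout are entirely illustrative.

DEVICE_TABLE = [
    # Entirely illustrative rows; the real table layout is internal to the server.
    {"device_name": "TAPE00", "mode": "emulate", "channel": 0,
     "queue_name": "VOL001.FLIGHT.DATA", "pName": "ptr_vol001"},
    {"device_name": "TAPE01", "mode": "emulate", "channel": 1,
     "queue_name": "VOL002.TXN.LOG", "pName": "ptr_vol002"},
]

def initialize(table):
    """Create one tape drive emulator configuration per device table row."""
    emulators = {}
    for row in table:
        emulators[row["device_name"]] = {
            "channel": row["channel"],
            "queue": row["queue_name"],
            "head_pointer": row["pName"],
        }
    return emulators

print(sorted(initialize(DEVICE_TABLE)))      # ['TAPE00', 'TAPE01']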
Fig. 6 is a block diagram of a closed network 600 in which the queue server 300 is used to provide protocol conversion among four mainframes. As shown, Mainframes A-D have channels coupling them to the queue server 300.
A queue 410 has been set up to store messages from Mainframe D. Following Mainframe D message storage, Mainframe A requests the messages in the queue 410.
Alternatively, Mainframe A may have requested data that the I/O manager 400 knows to be stored on a Mainframe D tape. The I/O manager may cause a message to be displayed to a technician to have the data loaded by Mainframe D and stored to a message queue 410 for retrieval by Mainframe A.

In operation, Mainframe D writes data to the queue 410 in a manner typical of writing to a tape drive. Mainframe A reads the messages in the queue 410 in a manner typical of reading from a tape in a tape drive. As described above, the I/O manager 400 (Fig. 4) and group drivers 405 (Fig. 4) support the tape drive emulators 320 during the read and write processes. Thus, protocol A operating in Mainframe A receives data from protocol D in Mainframe D without legacy applications having to be rewritten in either mainframe. This protocol conversion is supported by the ability, common to the mainframes, to interface with a tape drive, an ability served here by the queue server 300's emulation of a tape drive. Note that if the length of the queue 410 is reduced to one message, then the protocol conversion from protocol D to protocol A is near-real-time.
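The closed-network exchange reduces to a write by one mainframe and a read by another against the same queue in the queue server's memory, as the following illustrative sketch (with assumed function names) suggests:

shared_queue = []                            # the queue in the queue server's memory

def mainframe_d_write(records):              # Mainframe D: ordinary tape-style write
    shared_queue.extend(records)

def mainframe_a_read():                      # Mainframe A: ordinary tape-style read
    records = list(shared_queue)
    shared_queue.clear()
    return records

mainframe_d_write([b"protocol-D record"])    # D writes as if to a tape drive
print(mainframe_a_read())                    # A reads the same data, no rewrite needed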
Fig. 7 is a block diagram of an exemplary open network 700 having several queue servers 300 supporting mainframes in various cities about the United States. The application here is an airline, Airline A, that wishes to make its mainframe data available to other mainframes around the country for various offices of airline representatives, agents, and consumers having connections to the open network 700.
In the open network 700, Airline A has two mainframes 100a, 100b in Boston, connected to a queue server 300. As described above, the mainframes 100a, 100b can share each other's data through the use of the associated queue server. Similarly, the mainframes 100a, 100b can share data with other mainframes via the queue server 300 and the networking middleware 335 (Fig. 3). The queue server 300 is connected to a wide area network 350. The wide area network 350 is connected to another wide area network 350 (e.g., the Internet) and to another queue server 300, which is located in New York.
The queue server 300 located in New York supports an associated mainframe 100e, which is owned by Airline B. Airline B may, for instance, be a subsidiary of Airline A or a business partner, such as an independent, international airline affiliate. Personnel associated with Airline B may wish to access data from Airline A, such as passenger route information, transaction reports, etc.
Airline A also has a mainframe 100c in Chicago having an associated queue server 300 that provides connections to the wide area network 350, which provides connection to the queue server 300 in New York and distal connection to the queue server 300 in Boston. In this way, personnel in Chicago connected to the Chicago mainframe 100c have access to data in Boston and New York. Similarly, the personnel in Chicago have access to data stored on tapes or in the mainframes located in Denver, mainframe 100d, and Los Angeles, mainframe 100f.
In effect, the queue servers 300 provide protocol-to-protocol conversion between the protocols of operating systems running the mainframes 100a, 100b and network protocols, such as the TCP/IP protocols. Commercial subsystems are used where appropriate (e.g., commercial messaging middleware 335 and TCP/IP interface card 338) within the queue servers 300 so as to have the queue servers 300 be compatible with the latest and/or legacy open systems architectures.
While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.

Claims

What is claimed is:
1. Apparatus for protocol conversion, comprising: a device emulator coupled to a first device having a first protocol; digital storage coupled to the device emulator for temporary storage of information from the first protocol; at least one manager (i) coordinating the transfer of the information of the first protocol between the device emulator and the digital storage and (ii) coordinating transfer of the information between the digital storage and a second protocol.
2. The apparatus as claimed in Claim 1, wherein the device emulator is a tape drive emulator.
3. The apparatus as claimed in Claim 1, wherein the digital storage includes at least one of the following storage devices: magnetic disk, optical disk, and digital memory components.
4. The apparatus as claimed in Claim 1, wherein the manager manages input/output data between a mainframe computer and a commercial queuing system.
5. The apparatus as claimed in Claim 4, further including a group driver (i) supporting at least one pseudo-tape driver and (ii) interfacing with the digital storage.
6. The apparatus as claimed in Claim 1, further including a second device emulator coupled to the digital storage, wherein said at least one manager coordinates transfer of information between the two device emulators.
7. The apparatus as claimed in Claim 1, used in a wide area network to share data among multiple device emulators and at least two protocols.
8. The apparatus as claimed in Claim 1, used to transfer data between or among multiple mainframe computers.
9. The apparatus as claimed in Claim 1, wherein the information is arranged in a queue in the digital storage.
10. A method for protocol conversion, comprising: emulating a peripheral device to receive information from a first computer having a first protocol; temporarily storing the information; coordinating the transfer of the temporarily stored information having a first protocol to a second computer having a second protocol in a manner causing the information to take on characteristics of the second protocol.
11. The method as claimed in Claim 10, wherein said emulating includes emulating a tape drive.
12. The method as claimed in Claim 10, wherein storing the information includes writing the information to at least one of the following storage devices: magnetic disk, optical disk, and digital memory components.
13. The method as claimed in Claim 10, wherein said coordinating the transfer of the temporarily stored information includes managing input/output data between a mainframe computer and a commercial queuing system.
14. The method as claimed in Claim 13, further including directing the information from the first computer to a digital storage area used to temporarily store the information.
15. The method as claimed in Claim 10, further including emulating a second peripheral device and coordinating transfer of information between the first computer and the second computer separated by temporarily storing the information.
16. The method as claimed in Claim 10, used in a wide area network to share data among multiple computers using multiple protocols.
17. The method as claimed in Claim 10, used to transfer data between or among multiple mainframe computers.
18. The method as claimed in Claim 10, wherein the temporarily stored information is arranged in a queue.
19. In an apparatus for protocol conversion, a manager having distributed components, comprising: at least one I/O manager having intelligence to support states of (i) emulation devices transceiving messages using a first protocol and (ii) an interface transceiving messages using a second protocol; at least one emulation device providing low-level control reaction to an external device adhering to the first protocol; and at least one group driver to provide an interface between the I/O manager and said at least one emulation device.
20. The manager as claimed in Claim 19, wherein said at least one group driver buffers data to allow for direct memory access (DMA) transfer.
21. The manager as claimed in Claim 19, wherein said at least one emulation device emulates at least one tape drive.
22. The manager as claimed in Claim 19, wherein the manager includes multiple input/output managers.
23. The manager as claimed in Claim 19, wherein the manager includes a sufficient number of emulation devices, group drivers, and I/O managers to maximize parallel processing performance of protocol conversion.
24. A method for protocol conversion, comprising: using an I/O manager, transceiving messages with at least one first external device using a first protocol; using the I/O manager, transceiving the same messages with at least one second external device using a second protocol; emulating low-level control reactions to support the transceiving of the messages with the first external device in a manner that disassociates the I/O manager from the low-level control reactions; and channeling data flow between the I/O manager and said at least one first external device in a manner that minimizes interfacing by the I/O manager with said at least one first external device.
25. The method as claimed in Claim 24, further including buffering data to allow for direct memory access (DMA) transfers.
26. The method as claimed in Claim 24, wherein said emulating low-level control reactions is performed in a manner similar to that of a tape drive.
27. The method as claimed in Claim 24, further including channeling multiple data flows simultaneously by employing multiple I/O managers.
28. The method as claimed in Claim 24, further including a plurality of transceiving, emulating and channeling steps in a parallel manner to maximize parallel processing performance of the protocol conversion.
29. Apparatus for mainframe-to-mainframe connectivity, comprising: a first device emulator in communication with a first mainframe and acting as a standard sequential storage device; a second device emulator in communication with a second mainframe and also acting as a standard sequential storage device; digital storage coupled to the first and second device emulators to store information temporarily for the first and second device emulators; and at least one manager (i) coordinating a first transfer of information between the first device emulator and the digital storage and (ii) coordinating a second transfer of information from the digital storage to the second device emulator, the first and second mainframes having access to the information via respective device emulators.
30. The apparatus as claimed in Claim 29, wherein the information stored in the digital storage is arranged in a queue.
31. The apparatus as claimed in Claim 30, wherein the length of the queue is short to approach real-time protocol conversion.
32. The apparatus as claimed in Claim 30, wherein said manager dynamically adjusts the length of the queue.
33. The apparatus as claimed in Claim 29, further including a queue manager to support a case in which the first and second mainframes are not synchronized when transferring information via the apparatus.
34. The apparatus as claimed in Claim 29, wherein the second device emulator communicates with a second device emulator of a remote apparatus to transfer the information over a data network to provide remote connectivity between the first and second mainframes.
35. A method for providing mainframe-to-mainframe connectivity, comprising: assigning a first digital memory region external from a first mainframe to store messages in a sequential order for the first mainframe; assigning a second digital memory region external from a second mainframe to store messages in a sequential order for the second mainframe; emulating a device capable of communicating with the first and second mainframes to respond to requests from at least one of the mainframes; and in response to a request from at least one of the mainframes, establishing a link between the first and second digital memory regions to provide effective mainframe-to-mainframe connectivity between the first and second mainframes.
36. The method as claimed in Claim 35, further including storing messages from the mainframes in the digital memory region in a queue arrangement.
37. The method as claimed in Claim 36, wherein the length of the queue is short to approach real-time protocol conversion.
38. The method as claimed in Claim 36, further including dynamically adjusting the length of the queue.
39. The method as claimed in Claim 35, further including managing the queue to support a case in which the first and second mainframes are not synchronized when transferring information between the first and second mainframes.
40. The method as claimed in Claim 35, wherein emulating a device includes communicating with a remote process also emulating a device to transfer the information over a data network to provide remote connectivity between the first and second mainframes.
41. In a data storage system, a method for managing messages, comprising: receiving information that is normally contained in a standard tape label; based on the information, applying the information to a non-tape memory designated for a message queue; storing messages related to the information in the memory; and managing the message queue as a function of the standard tape label information.
42. The method as claimed in Claim 41, wherein the information normally contained in a standard tape label includes at least one of the following elements: volume serial number, data set name, expiration date, security attributes, and data characteristics.
43. The method as claimed in Claim 42, further including creating a queue name based on the volume serial number and data set name.
44. The method as claimed in Claim 42, further including deciding how long to maintain the message queue based on the expiration date.
45. The method as claimed in Claim 42, further including securing the message queue based on the security attributes.
46. The method as claimed in Claim 42, further including optimizing the message queue based on the data characteristics.
47. The method as claimed in Claim 42, further including mounting the message queue based on the volume serial number or data set name in response to receiving a request for either.
48. Apparatus for managing messages, comprising: a receiver to receive information from a computer that is normally contained in a standard tape label; and a controller that (i) applies the information to a non-tape memory, designated for a message queue, (ii) stores messages related to the information in the memory, (iii) manages the message queue as a function of the standard tape label information.
49. The apparatus as claimed in Claim 48, wherein the information normally contained in a standard tape label includes at least one of the following elements: volume serial number, data set name, expiration date, security attributes, and data characteristics.
50. The apparatus as claimed in Claim 49, wherein the controller creates a queue name based on the volume serial number and data set name.
51. The apparatus as claimed in Claim 49, wherein, based on the expiration date, the controller decides how long to maintain the message queue.
52. The apparatus as claimed in Claim 49, wherein, based on the security attributes, the controller secures the message queue.
53. The apparatus as claimed in Claim 49, wherein, based on the data characteristics, the controller optimizes the message queue.
54. The apparatus as claimed in Claim 49, wherein the controller mounts the message queue based on the volume serial number or data set name in response to receiving a request for either.
55. Apparatus for protocol conversion, comprising: means for interfacing with a computer having legacy applications; means for interfacing with an open system network; means for emulating a sequential storage device in a manner supported by the legacy applications; means for storing data being transferred between the computer and devices coupled to the open system network, said means for storing data interacting with said means for emulating a sequential storage device; and means for providing the computer and devices access to the stored data.
PCT/US2001/017858 2000-06-02 2001-06-01 Message queue server system WO2001095585A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2001275151A AU2001275151A1 (en) 2000-06-02 2001-06-01 Message queue server system
CA002381189A CA2381189A1 (en) 2000-06-02 2001-06-01 Message queue server system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US20917300P 2000-06-02 2000-06-02
US60/209,173 2000-06-02

Publications (2)

Publication Number Publication Date
WO2001095585A2 true WO2001095585A2 (en) 2001-12-13
WO2001095585A3 WO2001095585A3 (en) 2002-07-25

Family

ID=22777661

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/017858 WO2001095585A2 (en) 2000-06-02 2001-06-01 Message queue server system

Country Status (4)

Country Link
US (1) US20020004835A1 (en)
AU (1) AU2001275151A1 (en)
CA (1) CA2381189A1 (en)
WO (1) WO2001095585A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007012582A1 (en) 2005-07-25 2007-02-01 International Business Machines Corporation Hardware device emulation
CN104125283A (en) * 2014-07-30 2014-10-29 中国银行股份有限公司 Message queue receiving method and system for cluster

Families Citing this family (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6766520B1 (en) * 2000-06-08 2004-07-20 Unisys Corporation Tape drive emulation software objects, and emulation of other peripheral systems for computers
EP2017901A1 (en) * 2001-09-03 2009-01-21 Panasonic Corporation Semiconductor light emitting device, light emitting apparatus and production method for semiconductor light emitting DEV
US20030110265A1 (en) * 2001-12-07 2003-06-12 Inrange Technologies Inc. Method and apparatus for providing a virtual shared device
JP2003337732A (en) * 2002-05-21 2003-11-28 Hitachi Ltd Data linking method and device
US7454529B2 (en) * 2002-08-02 2008-11-18 Netapp, Inc. Protectable data storage system and a method of protecting and/or managing a data storage system
US7437387B2 (en) * 2002-08-30 2008-10-14 Netapp, Inc. Method and system for providing a file system overlay
US7882081B2 (en) * 2002-08-30 2011-02-01 Netapp, Inc. Optimized disk repository for the storage and retrieval of mostly sequential data
US8024172B2 (en) * 2002-12-09 2011-09-20 Netapp, Inc. Method and system for emulating tape libraries
US7567993B2 (en) * 2002-12-09 2009-07-28 Netapp, Inc. Method and system for creating and using removable disk based copies of backup data
US6973369B2 (en) * 2003-03-12 2005-12-06 Alacritus, Inc. System and method for virtual vaulting
US7437492B2 (en) * 2003-05-14 2008-10-14 Netapp, Inc Method and system for data compression and compression estimation in a virtual tape library environment
US7613784B2 (en) * 2003-05-22 2009-11-03 Overland Storage, Inc. System and method for selectively transferring block data over a network
US7644118B2 (en) 2003-09-11 2010-01-05 International Business Machines Corporation Methods, systems, and media to enhance persistence of a message
US7523459B2 (en) * 2003-10-14 2009-04-21 Sprint Communications Company Lp System and method for managing messages on a queue
US7559088B2 (en) * 2004-02-04 2009-07-07 Netapp, Inc. Method and apparatus for deleting data upon expiration
US7426617B2 (en) * 2004-02-04 2008-09-16 Network Appliance, Inc. Method and system for synchronizing volumes in a continuous data protection system
US7904679B2 (en) * 2004-02-04 2011-03-08 Netapp, Inc. Method and apparatus for managing backup data
US7315965B2 (en) * 2004-02-04 2008-01-01 Network Appliance, Inc. Method and system for storing data using a continuous data protection system
US20050182910A1 (en) * 2004-02-04 2005-08-18 Alacritus, Inc. Method and system for adding redundancy to a continuous data protection system
US7720817B2 (en) 2004-02-04 2010-05-18 Netapp, Inc. Method and system for browsing objects on a protected volume in a continuous data protection system
US7490103B2 (en) * 2004-02-04 2009-02-10 Netapp, Inc. Method and system for backing up data
US7406488B2 (en) * 2004-02-04 2008-07-29 Netapp Method and system for maintaining data in a continuous data protection system
US7783606B2 (en) * 2004-02-04 2010-08-24 Netapp, Inc. Method and system for remote data recovery
US7325159B2 (en) * 2004-02-04 2008-01-29 Network Appliance, Inc. Method and system for data recovery in a continuous data protection system
US8028135B1 (en) 2004-09-01 2011-09-27 Netapp, Inc. Method and apparatus for maintaining compliant storage
US7774610B2 (en) * 2004-12-14 2010-08-10 Netapp, Inc. Method and apparatus for verifiably migrating WORM data
US7581118B2 (en) * 2004-12-14 2009-08-25 Netapp, Inc. Disk sanitization using encryption
US7401198B2 (en) * 2005-10-06 2008-07-15 Netapp Maximizing storage system throughput by measuring system performance metrics
US20070094402A1 (en) * 2005-10-17 2007-04-26 Stevenson Harold R Method, process and system for sharing data in a heterogeneous storage network
CN101900956A (en) * 2005-11-23 2010-12-01 Fsi国际公司 Remove the method for material from base material
US7752401B2 (en) 2006-01-25 2010-07-06 Netapp, Inc. Method and apparatus to automatically commit files to WORM status
US7650533B1 (en) 2006-04-20 2010-01-19 Netapp, Inc. Method and system for performing a restoration in a continuous data protection system
US8271258B2 (en) * 2007-03-30 2012-09-18 International Business Machines Corporation Emulated Z-series queued direct I/O
EP2091203A1 (en) * 2008-02-12 2009-08-19 Koninklijke KPN N.V. Method and system for transmitting a multimedia stream
US8959248B2 (en) * 2008-02-22 2015-02-17 Microsoft Corporation Personal computing environment with virtual computing device
US8534216B2 (en) 2008-06-10 2013-09-17 Roundpeg Innovations Pty Ltd. Device and method for boarding an aircraft
CN104598097A (en) * 2013-11-07 2015-05-06 腾讯科技(深圳)有限公司 Ordering method and device of instant messaging (IM) windows
US9898483B2 (en) * 2015-08-10 2018-02-20 American Express Travel Related Services Company, Inc. Systems, methods, and apparatuses for creating a shared file system between a mainframe and distributed systems
US20170220363A1 (en) * 2016-01-28 2017-08-03 Paul Francis Gorlinsky Mainframe system tape image data exchange between mainframe emulator system
US10007452B2 (en) 2016-08-19 2018-06-26 International Business Machines Corporation Self-expiring data in a virtual tape server
US10013193B2 (en) 2016-08-19 2018-07-03 International Business Machines Corporation Self-expiring data in a virtual tape server
US11132218B2 (en) 2018-12-28 2021-09-28 Paypal, Inc. Task execution with non-blocking calls
CN113656191A (en) * 2021-07-19 2021-11-16 中国电子科技集团公司第十五研究所 Historical message playback method and device of message middleware
US11922026B2 (en) 2022-02-16 2024-03-05 T-Mobile Usa, Inc. Preventing data loss in a filesystem by creating duplicates of data in parallel, such as charging data in a wireless telecommunications network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998040850A2 (en) * 1997-03-13 1998-09-17 Whitney Mark M A system for, and method of, off-loading network transactions from a mainframe to an intelligent input/output device, including off-loading message queuing facilities
US5933584A (en) * 1993-03-13 1999-08-03 Ricoh Company, Ltd. Network system for unified business
WO1999067706A1 (en) * 1998-06-24 1999-12-29 Unisys Corporation System for high speed continuous file transfer processing
WO2000002124A2 (en) * 1998-07-01 2000-01-13 Storage Technology Corporation Method for verifying availability of data space in virtual tape system
EP0982650A1 (en) * 1996-03-22 2000-03-01 Hitachi, Ltd. Printing system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5109515A (en) * 1987-09-28 1992-04-28 At&T Bell Laboratories User and application program transparent resource sharing multiple computer interface architecture with kernel process level transfer of user requested services
US5455926A (en) * 1988-04-05 1995-10-03 Data/Ware Development, Inc. Virtual addressing of optical storage media as magnetic tape equivalents
US5706286A (en) * 1995-04-19 1998-01-06 Mci Communications Corporation SS7 gateway
US5717951A (en) * 1995-08-07 1998-02-10 Yabumoto; Kan W. Method for storing and retrieving information on a magnetic storage medium via data blocks of variable sizes
US5906658A (en) * 1996-03-19 1999-05-25 Emc Corporation Message queuing on a data storage system utilizing message queuing in intended recipient's queue
US6148377A (en) * 1996-11-22 2000-11-14 Mangosoft Corporation Shared memory computer networks
US6665714B1 (en) * 1999-06-30 2003-12-16 Emc Corporation Method and apparatus for determining an identity of a network device
US6640247B1 (en) * 1999-12-13 2003-10-28 International Business Machines Corporation Restartable computer database message processing
US6718372B1 (en) * 2000-01-07 2004-04-06 Emc Corporation Methods and apparatus for providing access by a first computing system to data stored in a shared storage device managed by a second computing system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5933584A (en) * 1993-03-13 1999-08-03 Ricoh Company, Ltd. Network system for unified business
EP0982650A1 (en) * 1996-03-22 2000-03-01 Hitachi, Ltd. Printing system
WO1998040850A2 (en) * 1997-03-13 1998-09-17 Whitney Mark M A system for, and method of, off-loading network transactions from a mainframe to an intelligent input/output device, including off-loading message queuing facilities
WO1999067706A1 (en) * 1998-06-24 1999-12-29 Unisys Corporation System for high speed continuous file transfer processing
WO2000002124A2 (en) * 1998-07-01 2000-01-13 Storage Technology Corporation Method for verifying availability of data space in virtual tape system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DIPAK GHOSAL ET AL: "HIGH-SPEED PROTOCOL PROCESSING USING PARALLEL ARCHITECTURES" PROCEEDINGS OF THE CONFERENCE ON COMPUTER COMMUNICATIONS (INFOCOM). TORONTO, JUNE 12 - 16, 1994, LOS ALAMITOS, IEEE COMP. SOC. PRESS, US, vol. 1, 12 June 1994 (1994-06-12), pages 159-166, XP000496464 ISBN: 0-8186-5572-0 *
NEWMAN P: "Backward explicit congestion notification for ATM local area networks" GLOBAL TELECOMMUNICATIONS CONFERENCE, 1993, INCLUDING A COMMUNICATIONS THEORY MINI-CONFERENCE. TECHNICAL PROGRAM CONFERENCE RECORD, IEEE IN HOUSTON. GLOBECOM '93., IEEE HOUSTON, TX, USA 29 NOV.-2 DEC. 1993, NEW YORK, NY, USA,IEEE, 29 November 1993 (1993-11-29), pages 719-723, XP010109755 ISBN: 0-7803-0917-0 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007012582A1 (en) 2005-07-25 2007-02-01 International Business Machines Corporation Hardware device emulation
US7843961B2 (en) 2005-07-25 2010-11-30 International Business Machines Corporation Hardware device emulation
CN104125283A (en) * 2014-07-30 2014-10-29 中国银行股份有限公司 Message queue receiving method and system for cluster

Also Published As

Publication number Publication date
WO2001095585A3 (en) 2002-07-25
AU2001275151A1 (en) 2001-12-17
CA2381189A1 (en) 2001-12-13
US20020004835A1 (en) 2002-01-10

Similar Documents

Publication Publication Date Title
US20020004835A1 (en) Message queue server system
US5265252A (en) Device driver system having generic operating system interface
JP2677744B2 (en) Distributed memory digital computing system
US6065087A (en) Architecture for a high-performance network/bus multiplexer interconnecting a network and a bus that transport data using multiple protocols
US6094605A (en) Virtual automated cartridge system
EP0993636B1 (en) Dos based application supports for a controllerless modem
US5613155A (en) Bundling client write requests in a server
JP4410557B2 (en) Method and system for accessing a tape device in a computer system
US5327558A (en) Method for asynchronous application communication
US5305461A (en) Method of transparently interconnecting message passing systems
US5664145A (en) Apparatus and method for transferring data in a data storage subsystems wherein a multi-sector data transfer order is executed while a subsequent order is issued
US4918595A (en) Subsystem input service for dynamically scheduling work for a computer system
US6170045B1 (en) Cross-system data piping using an external shared memory
KR19990082226A (en) Application programming interface for data management and bus management over the bus structure
US4470115A (en) Input/output method
US7624156B1 (en) Method and system for communication between memory regions
US5721920A (en) Method and system for providing a state oriented and event driven environment
US7155492B2 (en) Method and system for caching network data
US6092166A (en) Cross-system data piping method using an external shared memory
US6108694A (en) Memory disk sharing method and its implementing apparatus
US5574946A (en) Data transmission system using independent adaptation processes associated with storage unit types for directly converting operating system requests to SCSI commands
US20030110265A1 (en) Method and apparatus for providing a virtual shared device
US20020002631A1 (en) Enhanced channel adapter
JP2901882B2 (en) Computer system and method of issuing input / output instructions
US6061771A (en) Cross-system data piping system using an external shared memory

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

WWE Wipo information: entry into national phase

Ref document number: 2381189

Country of ref document: CA

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP