US20100135643A1 – Streaming non-continuous video data – Google Patents

 

The PTZ element can be used to specify a custom camera position, which does not correspond to any of the presets. A LIGHT element indicates whether backlight compensation associated with a particular camera – is to be turned on while the event is triggered. A non-zero value for the LIGHT element indicates that the backlight compensation is to be turned on, while a value of zero indicates that the backlight compensation is to be off.
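
The exact layout of these trigger settings is not reproduced above, so the following is a minimal sketch of how PTZ and LIGHT elements might be read. The XML element and attribute names and the use of Python's xml.etree are assumptions for illustration; only the PTZ and LIGHT semantics come from the description.

    import xml.etree.ElementTree as ET

    # Hypothetical event-trigger entry; tag and attribute names are assumed.
    sample = """
    <EVENT>
      <PTZ pan="10.5" tilt="-3.0" zoom="2.0"/>
      <LIGHT>1</LIGHT>
    </EVENT>
    """

    event = ET.fromstring(sample)

    ptz = event.find("PTZ")
    if ptz is not None:
        # A PTZ element specifies a custom camera position rather than a preset.
        position = {axis: float(ptz.get(axis)) for axis in ("pan", "tilt", "zoom")}

    light = event.find("LIGHT")
    # A non-zero LIGHT value means backlight compensation is turned on while
    # the event is triggered; zero means it stays off.
    backlight_on = light is not None and int(light.text) != 0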

The storage server configuration file contains the information necessary to run an individual storage server. The storage server configuration file is read by the recording engine when the recording engine begins to execute. The storage server configuration file is rewritten every time the configuration of the recording engine is changed by an administrator.

The storage server configuration tool can also be used to rewrite the storage server configuration file, and sends an inter-process communication (IPC) message to the recording engine every time the configuration file is changed, in order to instruct the recording engine to determine the general settings of the storage server again. The storage server users file defines which users have access to the recording engine via the web interface, and can be modified using the storage server configuration tool, which will be described in detail below.

The storage server users file is a text file consisting of a set of entries, one per line. Each entry consists of a user name identifying the user and a realm. The storage server groups file is used to define which users listed in the storage server users file have administrative privileges. The storage server groups file is also generated and modified by the storage server configuration tool discussed below. The video and event data stored by the recording engine is distributed among a set of files.
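
As a rough illustration of the users file just described, the sketch below reads one entry per line. The colon delimiter and any fields beyond the user name and realm are assumptions; the source specifies only that each line holds one entry beginning with a user name and a realm.

    def parse_users_file(path):
        """Parse a storage server users file: one entry per line (colon delimiter assumed)."""
        users = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                fields = line.split(":")
                if len(fields) < 2:
                    continue          # malformed entry; real code might log this
                name, realm = fields[0], fields[1]
                users[name] = {"realm": realm, "rest": fields[2:]}
        return users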

All video data received from a camera – is stored in video files. Event data can be collected from a variety of sources including camera servers – , viewers , and internal storage server events. Event data is stored in the form of event records in a separate event file. Event data can also be inserted into text samples, which are stored together with received video frames in the video files.

In a video monitoring and recording system, it is advantageous to have recorded video in files using standard or de facto standard formats so that it is convenient for users to extract video from the storage server for playing or processing using third party programs such as commonly used media players. Also, in some situations, it may be advantageous to maintain the data in smaller chunks associated with different durations of time.

In these situations, the smaller chunks facilitate finer-grained control over the deletion of video sample data and enable easier access to the stored data by third-party tools.

The storage server maintains distinct sets of video files and event files within the storage device including internal, external and network accessible drives. There is a set of files associated with each of the cameras – for a specific duration of time, consisting of a media file and an index file , which are collectively referred to herein as video files, and an event file.

The first file , referred to as a media file, is used to store the video sample data captured by the cameras –. Such media files can store video sample data as samples representing video frames and text samples representing time stamps and event notifications. The second file of the pair of video files is referred to as an index file. The index file stores information which instructs a media player application how to play the sample data in the associated media file. The formats of these files will be described in detail in section 3. Although an implementation is described herein where media samples and index information such as timing and resolution are in separate files, in another implementation media samples and index information can be configured within a common file.

The data files corresponding to each of the camera servers – are stored in a subdirectory of a drive selected to store files for that camera server –. Each of the camera servers – that is recording at a given time has one pair of standby files. The standby files are named according to the following template 1: The character immediately following the word Standby is set to either 0 or 1, and indicates which of the two standby files it is.

As will be described below, there can be two standby files. The cameraid section of the template 1 is a character data string that represents a unique identifier of a particular camera server – given to the particular camera server – by the recording engine when the particular camera server – was first added to the system. The cameraid section is represented in hexadecimal format, with lower case letters representing the values A to F.

The creationdate section of the template 1 represents the date at which the associated standby file was created, in Coordinated Universal Time (UTC). The format of the creationdate section is yyyymmdd. The creationtime section of the template 1 represents the time at which the standby file was created, in universal time convention. The format of the creationtime section is hhmmss. The count section of the template 1 represents the sequential number of the associated standby files, counting from a standby file representing the beginning of a current recording session by the recording engine. A separate counter can be maintained for each camera server – in the system. If more than one standby file is created with the same camera identifier, creation date and time, the count section distinguishes between the files.
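
A name generator following template 1 might look like the sketch below. The separators and file extension are assumptions, since the template text itself is not reproduced here; the Standby prefix, the 0/1 digit, the lower-case hexadecimal cameraid, the UTC yyyymmdd and hhmmss fields, and the per-camera count come from the description above.

    from datetime import datetime, timezone

    def standby_name(slot, camera_id, count, ext):
        """Build a standby file name following template 1 (separators assumed)."""
        now = datetime.now(timezone.utc)
        return "Standby{}_{:x}_{}_{}_{:04d}.{}".format(
            slot,                    # 0 or 1: which of the two standby files
            camera_id,               # unique camera server id, lower-case hex
            now.strftime("%Y%m%d"),  # creationdate, UTC
            now.strftime("%H%M%S"),  # creationtime, UTC
            count,                   # sequential count for this camera server
            ext,                     # media or index extension, assumed
        )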

The recording engine attempts to maintain one inactive standby file. The recording engine maintains one inactive standby file to ensure that any overhead incurred when creating a set of standby files does not delay the writing of sample data to these files, by having a file ready to record to when a file swapover occurs. Thus, when a standby file is moved to an active state by having a sample written to the corresponding media file of the standby file, a new inactive standby file is created in a parallel operation.

When a standby file pair is completed, the standby file pair is closed and renamed as a completed file pair according to the following template 2: As before, the cameraid section of the template 2 is a character string that represents the unique identifier of a particular camera server – given to the particular camera server – by the recording engine when the particular camera server – was first added to the system. The value of the cameraid section is represented in hexadecimal format, with lower case letters representing the values A to F.

The startdate section of the above template represents the date at which the first sample of the standby file pair was recorded, in universal time convention format. The startdate section format is yyyymmdd. The starttime section represents the time at which the first sample of the file was recorded, in universal time convention format.

The format of the starttime section is hhmmss. The startdate and starttime do not necessarily represent the same values as the creationdate and creationtime used in the standby files.

The enddate section represents the date at which the standby file pair was completed, in universal time convention format. The format of the enddate section is yyyymmdd. The endtime section represents the time at which the standby file pair was completed, in universal time convention format. The format of the endtime section is hhmmss.
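
A companion sketch for template 2, used when a completed pair is renamed, is given below; both time arguments are datetime objects in UTC. The separators and extension are again assumptions, while the startdate/starttime and enddate/endtime fields follow the description above.

    def completed_name(camera_id, first_sample_utc, completed_utc, ext):
        """Build a completed file name following template 2 (separators assumed)."""
        return "{:x}_{}_{}_{}_{}.{}".format(
            camera_id,
            first_sample_utc.strftime("%Y%m%d"),  # startdate: first sample, UTC
            first_sample_utc.strftime("%H%M%S"),  # starttime
            completed_utc.strftime("%Y%m%d"),     # enddate: completion, UTC
            completed_utc.strftime("%H%M%S"),     # endtime
            ext,
        )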

The naming format of the above files allows the access engine to determine whether a media file and associated index file representing a particular moment in time is present. The naming format also makes it possible for third party software applications to automatically handle or process the files, e.g. for automated periodic copying to archival storage, without having to check any form of central index of files.

The naming format also avoids the possibility of inconsistency between a central index and the actual files, which could occur if a central file index was maintained by the recording engine. As described below, the index file contains one or more tracks. Each track contains a reference to an associated media file. As such, in another implementation, an index file having more than one track can point to several media files.

Each atom has an associated type code which specifies the kind of data stored in the associated atom. An atom, referred to as a track atom, stores each track. A track associated with a particular track atom contains a pointer to the associated media file. The tracks store offsets to each sample in the media file.

Each of the tracks of the index file also contains a sample size table. The sample size tables store the size of the samples contained in the associated media file. If there is a system failure, data added to the index file after the last time that the file was flushed will be lost from the index file. The format of the media file is flexible.

The following sections describe a preferred format for the media file, which is supported by the described arrangements.

However, a person skilled in the relevant art would appreciate that any suitable file format can be used for the media file. Thus, changes in frame rate or late samples due to network lag can result in the playback rate inaccurately portraying the passage of time. When the media file is first created, a set of headers is generated.

Further, individual stream headers are generated to represent each track within the associated index file that points to the given media data file. An empty sample list structure is also created to store each individual sample.

When the media file is closed, the headers are updated to reflect the number of samples that were stored within the media file. When a sample is inserted, the stored sample has a header indicating which data stream the corresponding sample belongs to and the size of the sample. The CNVD chunk contains additional information, which can be used to recover the media file and the associated index file. In particular, the CNVD chunk contains information including the timestamp of the sample.

Each video sample of the sample data captured by the camera servers – is preferably configured as a separate JPEG file, which is inserted as a separate sample. Each text sample is stored as two copies of the text string. The first copy of the text string is a null-terminated version of the text string. The second copy of the text string is placed immediately after the null character terminating the first copy. As an example, FIG. The event files corresponding to each camera server – are stored in a subdirectory of a drive configured within the storage device and selected to store event files for that camera server –. The recording engine creates an event file corresponding to each video file pair.
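
The per-sample layout can be illustrated for a text sample as below. Field widths, byte order and the chunk ordering are assumptions; the per-sample header carrying the stream and size, the CNVD recovery chunk carrying a timestamp, and the doubled null-terminated text copy come from the description above.

    import struct

    def pack_text_sample(stream_id, timestamp_ms, text):
        """Pack a text sample: header, CNVD recovery chunk, doubled string (layout assumed)."""
        encoded = text.encode("utf-8")
        payload = encoded + b"\x00" + encoded   # null-terminated copy, then a second copy
        header = struct.pack("<II", stream_id, len(payload))  # stream id + sample size
        cnvd = b"CNVD" + struct.pack("<Q", timestamp_ms)      # timestamp for file recovery
        return header + cnvd + payload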

The matching of an event file with a video file pair is useful in situations where video and event data is moved to archival storage, as the video and event data can both be copied or moved together and later both be replaced together. Such matching ensures that the video data associated with specific events will be available if a user attempts to access information about the events, and that event data associated with specific video will be available if a user tries to access specific video.

Each event file can be named according to the following template 3: The cameraid section of the template 3 is a character string that represents the unique identifier of a particular camera server – given to the particular camera – by the recording engine when the particular camera server – was first added to the system. The value of the cameraid section of the template 3 is represented in hexadecimal format, with lower case letters representing the values A to F.

The creationdate section of the template 3 represents the date at which the event file was created, in UTC. The format of the creationdate is yyyymmdd. The creationtime section of the template 3 represents the time at which the event file was created, in UTC. The count section of the template 3 represents a sequential number corresponding to the event file, counting from an event file representing the beginning of a current recording session by the recording engine. If more than one file is created with the same camera identifier, creation date and time, the count section distinguishes between the files.

In addition to these event files, there is a system event file that contains events associated with the storage server but not necessarily associated with a particular camera server –. These include events that indicate the startup or shutdown of the recording engine , or remote users logging in to the system on a viewer to retrieve viewer configuration and layout files, as will be explained in more detail below.

The event file is a text file, encoded in UTF-8 format. Each line of the event file contains an event record describing a single event, and consists of several fields, delimited by colons, as follows: The datetime field of the event file line 4 describes the date and time at which the event was recorded. The datetime field is in the standard format that is generated by the ANSI function ctime, with an additional millisecond field inserted.

An example of such a time begins: Tue Aug 19. The cameraid field describes the identifier of the camera server – associated with the event. The type field of the event file line 4 comprises a four letter code defining the type of the event, followed by either a zero or a one, distinguishing OFF and ON events. Certain events, such as a user logging on, do not have an OFF event. Table 6 below defines all the available event types:

The priority field of the event file line 4 above contains a small integer value, starting from one, that indicates the priority of the event. The final field of the event file line 4 , description, may optionally contain additional colons, as the end of the field is located using the newline character. The configuration management module reads and writes the storage server configuration file, and provides a mechanism to allow other applications to signal the recording engine when they modify the storage server configuration file.
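
Putting the field descriptions together, a parser for one event record might look like the sketch below. The millisecond placement inside the ctime string, the exact delimiters around the type digit, and the 1-means-ON mapping are assumptions consistent with the description above.

    import re

    # The ctime-style datetime itself contains colons, so the record is matched
    # with a pattern rather than split naively on ":".
    RECORD = re.compile(
        r"^(?P<datetime>\w{3} \w{3} [ \d]\d \d\d:\d\d:\d\d\.\d+ \d{4})"
        r":(?P<cameraid>[^:]*)"
        r":(?P<type>[A-Z]{4})(?P<onoff>[01])"
        r":(?P<priority>\d+)"
        r":(?P<description>.*)$"    # the description may contain further colons
    )

    def parse_event_record(line):
        m = RECORD.match(line.rstrip("\n"))
        if m is None:
            raise ValueError("malformed event record")
        rec = m.groupdict()
        rec["priority"] = int(rec["priority"])
        rec["on"] = rec.pop("onoff") == "1"   # assumed: 1 marks an ON event
        return rec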

The process is preferably implemented as software resident on the hard disk drive and controlled in its execution by the processor. The process waits on a synchronisation object that can be signalled by other applications. In this instance, the synchronisation object used is a named Event object. The process updates the General section of the configuration file. If camera servers – or cameras – were also added, modified or deleted, these changes are ignored. The process begins at the first step where the processor creates a synchronisation object that can be accessed by other applications, setting the name and security attributes of the synchronisation object.

The processor then enters a loop, waiting for the synchronisation object to be triggered by another application. After the object is triggered at step , the Storage Server Configuration File is read at the next step . At the next step , the process updates the internal variables that correspond to the General section of the Storage Server Configuration File, before returning to step , where a new synchronisation event is sought.
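
The loop just described can be sketched as follows, with Python's threading.Event standing in for the named operating-system Event object; the function and variable names are illustrative.

    import threading

    config_changed = threading.Event()   # stand-in for the named Event object
    general_settings = {}

    def watch_configuration(read_config_file):
        while True:
            config_changed.wait()        # block until another application signals
            config_changed.clear()
            config = read_config_file()  # re-read the storage server configuration file
            # Only the General section is refreshed; camera server and camera
            # additions, modifications or deletions are ignored by this loop.
            general_settings.update(config.get("General", {}))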

The video file management module of the recording engine manages all of the video files (i.e. the media and index files). The video file management module is responsible for creating new video files as required, writing video sample data to the video files, and swapping to a standby file when the size or duration of an existing video file exceeds a set limit. The video file management module comprises a video file management thread or subroutine.

The video file management thread is responsible for creating and closing video files. The creation and closing operations can be relatively time consuming, and are thus performed by a separate thread. As described above, the recording engine attempts to ensure that an inactive standby file is available at all times.

As soon as a first sample is written to a standby file, the file management thread is instructed to create a new inactive standby file, ready for the next swapover.
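
That hand-off might be sketched as shown below; the queue-based message and its name are assumptions standing in for the inter-thread signalling described above.

    import queue

    file_mgmt_queue = queue.Queue()      # consumed by the file management thread

    def write_sample(active_file, sample):
        first_sample = not active_file["samples"]
        active_file["samples"].append(sample)
        if first_sample:
            # The standby file has just become active, so request that a new
            # inactive standby file be created in parallel for the next swapover.
            file_mgmt_queue.put(("CREATE_FILE_SET", active_file["camera_id"]))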

A process for creating and closing video files in accordance with the video file management thread is described below with reference to FIG. The process is preferably implemented as software resident on the hard disk drive of the storage server and being executed by the processor. The process begins at step , where the processor waits until a message from another thread is detected.

The message can be one of three types as will be described below. Otherwise, the process proceeds to step . At step , the processor enters a create video file set process , in order to create a media file and associated index file. The process will be described in detail below in section 3.

The process then returns to step . Otherwise the process proceeds to step . At step , the process enters a close video file set process , in order to close the media file and index file to create a completed file pair.

The process will be described in section 3. Otherwise the process returns to step . The create video file set process will now be described with reference to the flow diagram of FIG.

The process is called by the video file management thread. The purpose of the process is to create a set of standby files, each consisting of one media file and an associated index file. The process is preferably implemented as software resident in the hard disk drive and being controlled in its execution by the processor. The process begins at step where the processor acquires a synchronisation object that is shared with the access engine. The synchronisation object is stored on the hard disk drive and is used to ensure that creation, opening, closure and deletion of media and index files are performed atomically with respect to each of the recording engine and the access engine. In the next step , the processor generates a set of suitable standby file names, according to the specification in section 3.

At the next step , the processor increments the file counter for the given camera – in preparation for the next standby file that is to be generated. Then at the next step , the processor creates the event file in the hard disk drive. The event file is created in accordance with a create event file process which will be described below with reference to FIG.

At the next step , the processor creates the media file in the hard disk drive. The media file is created at step in accordance with a process for creating a media file. The process will be explained in detail below with reference to FIG.

The media file is created according to a media file path parameter and a media file format parameter, which are supplied by the processor. The media file format parameter specifies the file format of the media file. Then at the next step , the processor creates the index file in the hard disk drive. The index file is generated based on an index file path parameter, which is supplied by the processor. The index file is created at step in accordance with a process for creating an index file. The process will be described in detail below with reference to FIG.

The process continues at the next step where the processor creates a first track (a video track). The track created at step is configured to reference video samples. Also at step , the processor adds the video track to the index file configured within the hard disk drive. Then at the next step , the processor creates a second track (a text track).

The second track is configured to reference text strings. Also at step , the processor adds the created text track to the index file. The process concludes at the next step where the synchronisation object obtained in step is released, thereby allowing the access engine to operate on these files. The create media file process of step will now be explained with reference to the flow diagram of FIG. The process begins at step where the processor creates the media file at a memory address, in the hard disk drive , as specified by the media file path parameter supplied by the processor at step . Otherwise, the process concludes.

At step , the processor creates an empty set of headers and an empty sample structure to store the samples detected by the processor. Then at step , the created sample structures are written to the media file configured within the hard disk drive. The index file structure can be stored either in the hard disk drive or in a temporary file configured within memory , since the index file structure will be appended to the end of the media file , the resultant length of which is currently unknown.

The process of creating an index file will now be explained with reference to the flow diagram of FIG. The process begins at step where the processor creates the index file at a memory address, in the hard disk drive , as specified by the index file path parameter supplied by the processor at step . Then at the next step , the processor initialises a movie header atom for the index file. The process concludes at the next step , where the processor initialises a user data atom for the index file. The close video file pair process is called by the video file management thread.

The purpose of the process is to take an active open pair of files, close the files and rename the files to the appropriate file names. The process begins at step where the processor acquires the synchronisation object that is shared with the access engine. As described above, the synchronisation object is used to ensure that creation, opening, closure and deletion of the media file and index file forming the file pair are performed atomically with respect to each of the recording engine and the access engine. At the next step , the processor determines whether the active file pair actually contains samples.

If no samples are present at step , then the file pair stores no useful data and the process proceeds to step . Otherwise, if samples are present within the standby file pair, a set of new file names is generated in accordance with the specification described above in section 3.

At the next step , the media file of the file pair is closed in accordance with a complete media file process , which will be described in detail below with reference to FIG.

Then at the next step the media file is renamed. The process continues at the next step where the media file reference in the associated index file is updated. At the next step , the active index file is closed in accordance with a complete index file process , which will be described in detail below with reference to FIG.

Then at the next step the index file is renamed. At step , the media file is closed in accordance with the complete media file process . Since the media file contains no useful data, the media file is deleted from the hard disk drive at the next step . The process continues at the next step , where the index file of the file pair is closed in accordance with the complete index file process . Since the index file contains no useful data, the index file is deleted from the hard disk drive at the next step . At step , the process closes the event file belonging to the current file set by calling the event file process . The process concludes at the final step , where the synchronisation object obtained in step is released.

The process as executed at steps and will now be described with reference to the flow diagram of FIG. The process performs any cleanup tasks required to ensure that a media file is not corrupt and is playable, and then closes the media file. Otherwise, the process proceeds to step where any previously unwritten media file data in memory is flushed to the media file configured within the hard disk drive. After step the process proceeds directly to step . The headers are updated to reflect the exact number of samples stored in the media file. The process concludes at the next step where the media file is closed to create a completed media file. The complete index file process as executed at steps and will now be described with reference to the flow diagram of FIG.

The process begins at the first step where the processor sets the end time of the index file to an index end time value supplied by the processor. The end time of the index file is set to cover the period from the first sample of the media file to the current time represented by a system clock of the storage server. At the next step , the processor selects a first track of the index file. Then at step , if the processor determines that there are any remaining tracks of the index file to be processed, then the process proceeds to step . At step , the end time for the selected track is set to the index end time supplied by the processor. The next track of the index file is then selected by the processor at step and the process returns to step . At step , the processor flushes any previously unwritten index file data in memory to the index file configured in the hard disk drive. The process then concludes at the next step , where the index file is closed.

The process takes a sample representing a single video data sample from a frame buffer configured within memory , and ensures that the sample is written to the media file. The associated index file is then updated to reference the newly written sample. The process begins at the first step where, if the current rate of saving sample data is the same as the rate at which the storage server is acquiring sample data, then the process proceeds to step . Otherwise the process proceeds to step , where the capture time of the current sample is compared against a save rate counter.

That is, at step , the save rate counter is incremented by the difference between the time of the current sample and the time of the previous sample. At the next step , if the value of the save rate counter exceeds the current save rate, then the process proceeds to step . Otherwise the process concludes.

At step , the processor resets the save rate counter by subtracting the current save rate from the value of the save rate counter. In the next step , the processor determines whether there is enough disk space to write the current frame to the disk associated with the camera –. If there is not enough room, the process concludes. Then at step the processor ensures that there is space to store the sample to be written to the media file in accordance with a check video file limits process which will be described in section 3.
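
The save-rate bookkeeping of the last few paragraphs condenses into a small sketch. The millisecond units and the handling of the very first sample are assumptions; the accumulate, compare and subtract behaviour follows the description above.

    class SampleSaver:
        """Decide which samples to keep when saving slower than acquiring."""

        def __init__(self, save_interval_ms):
            self.save_interval_ms = save_interval_ms
            self.counter_ms = 0.0        # the save rate counter
            self.last_time_ms = None

        def should_save(self, sample_time_ms):
            if self.last_time_ms is not None:
                # Increment the counter by the gap between consecutive samples.
                self.counter_ms += sample_time_ms - self.last_time_ms
            self.last_time_ms = sample_time_ms
            if self.counter_ms > self.save_interval_ms:
                # Subtract rather than zero the counter so no drift accumulates.
                self.counter_ms -= self.save_interval_ms
                return True
            return False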

At the next step , if there are no samples in the media file , then the process proceeds to step . Otherwise, the process proceeds directly to step . At step , the processor sends a request to the video file management thread, executing on the storage server , to create a new inactive standby file.

At step , the sample is written to a video track of the media file. The sample is written at step in accordance with a write frame to video track process which will be explained in detail below with reference to FIG. The process concludes at the next step , where the properties of the sample are written in the form of a text string to a text track of the media file in accordance with a write frame properties to text track process . The process of checking video file limits will now be described with reference to FIG.

The process is preferably implemented as software resident on the hard disk drive and being controlled in its execution by the processor. The process determines whether an active file pair for a camera server – can accommodate further samples. If no more samples can be accommodated, the processor switches to an inactive standby file for that camera server – , and issues a request to close the previous active media file pair. The process begins at the first step , where the processor checks the index file corresponding to the media file of the active file pair.

If the index file has free entries in the corresponding sample table structure to accommodate a new sample, then the process proceeds to step . At step , if the capacity of the currently active media file is less than a predetermined limit, then the process proceeds to step . At step , if the processor determines that the system is configured by a user to limit media file sizes, then the process proceeds to step . At step , if the processor determines that the currently active media file is smaller than the user specified limit, then the process proceeds to step . At step , if the processor determines that the system is configured by the user to limit media file duration, then the process proceeds to step . At step , if the processor determines that the duration of the currently active media file is smaller than the user specified limit, then the process concludes.

At step , the processor initiates a file swapover by setting the currently active media file to an inactive standby media file. Then at the next step , the processor sends a CLOSE FILE request to the video file management thread in order to close the media file , and the process concludes. The process of writing a sample to a video track as executed at step will now be explained with reference to FIG. At the first step , if there are already samples present in the video track of the active media file, then the process proceeds to step . At step , if the format and resolution of the new samples are not the same as the current samples in the media file, then the process proceeds to step . Otherwise the process proceeds directly to step . At step , the processor reconfigures a video track of the media file to accept a sample using the correct data format and resolution.
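
The checks described above reduce to a small predicate; the configuration keys are illustrative, while the three conditions (free sample-table entries, a size limit and a duration limit) come from the description.

    def needs_swapover(index_free_entries, file_size, file_duration, cfg):
        """Return True when the active file pair cannot take another sample."""
        if index_free_entries == 0:
            return True                  # sample table in the index file is full
        if cfg.get("limit_size") and file_size >= cfg["max_size"]:
            return True                  # user-configured size limit reached
        if cfg.get("limit_duration") and file_duration >= cfg["max_duration"]:
            return True                  # user-configured duration limit reached
        return False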

The process concludes at step where the sample is added to the video track in accordance with a process for adding a sample to a track, which will be described in detail below with reference to FIG. The process for writing sample properties to a text track will now be described with reference to FIG.

The process ensures that time and event information is written to the text track of a media file configured within the hard disk drive of the storage server. The process begins at the first step where the processor accesses a timestamp indicating the acquisition time associated with a sample to be written, and generates a string of the format that is generated by the ANSI function ctime , with an additional millisecond field inserted.

At the next step , if there are any events for the associated camera server – that have not been saved to the current active media file, then the process proceeds to step . At step , the processor pulls the next event from an event queue configured in memory , in turn, and the description of the event is appended to the string generated at step . The process concludes at step , where the text string is added to the text track in accordance with the process . The process for adding a sample to a track as executed at steps and will now be explained with reference to FIG.

The process takes a video sample or a text sample and adds it to the appropriate track. The process begins at step where the sample is added to the active media file. The sample is added at step in accordance with an add sample to media file process , which will be described in detail below with reference to FIG.

At the next step , the sample offset, indicated by a sample offset parameter, and the sample size parameter of the sample to be added are written to the sample table structure of the index file. At the next step , if the sample is the first sample to be associated with the track, then the process continues at the next step . At step , the processor sets the starting time of the track to the timestamp of the current sample using the sample recording time parameter.

Otherwise, the processor determines the duration of the previous sample by subtracting the timestamp of the previous sample from the timestamp of the current sample. Then at the next step , a track duration value associated with the relevant track is updated. At the next step , if the track duration causes the end of the track to exceed the index duration, then the process proceeds to step . Otherwise, the process of step concludes.
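
The duration bookkeeping can be sketched as follows; plain dictionaries stand in for the track atoms and tables described above, with a track holding samples, start, duration and last_timestamp keys, and the index holding start and duration.

    def add_sample_to_track(track, index, offset, size, timestamp):
        track["samples"].append({"offset": offset, "size": size})
        if len(track["samples"]) == 1:
            track["start"] = timestamp          # first sample fixes the start time
            track["duration"] = 0.0
        else:
            # Duration of the previous sample is the gap up to the current one.
            track["duration"] += timestamp - track["last_timestamp"]
        track["last_timestamp"] = timestamp
        track_end = track["start"] + track["duration"]
        if track_end > index["start"] + index["duration"]:
            # Stretch the index duration to accommodate the track.
            index["duration"] = track_end - index["start"]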

At step , the index duration is updated to accommodate the track. The process of adding a sample to a media file will now be explained with reference to FIG. Otherwise, the process proceeds directly to step , where the data of the detected sample is appended to a media file configured within the hard disk drive. At step , the processor checks whether the sample to be added to the media file is empty.

At step , if the track being written is a text track, then the process proceeds to step . At the next step , the original data of the detected sample is appended as a data string to the null terminated copy of the string. Then at step , the processor creates a custom data chunk containing a timestamp based on the detected sample acquisition time and writes the custom data chunk to the media file configured within the hard disk drive. At step , the processor writes a data chunk based on the detected sample to the media file. As described above, the recording engine includes an event management module for managing all of the event files stored on the drives configured within the storage device of the storage server. The event management module is responsible for creating new event files as required, writing data to these new event files, and swapping to further new event files when the size or duration of an existing event file exceeds a predetermined limit.

The process creates a standby media file. The process begins at the first step where the processor uses functionality provided by the operating system to create a file using the path generated at step of the create file pair process . The generate event process begins at the first step , where an event record is created according to the format described above. At the next step , the event record is written to a currently open event file configured within the storage device or a currently open system event file.

The process begins at the first step where the processor generates a file set stopping event using the generate event process. The process concludes at the next step where the event file is closed. As described above, the camera server communications module manages communications between the recording engine and the camera servers – that the recording engine controls.

The communications module is responsible for ensuring that correct settings are sent to a particular camera server – , and receiving samples and event notifications as required. As such, the camera server communications module is responsible for maintaining connections to the camera servers – , and the exchange of all data between the recording engine and each of the camera servers –. The recording engine connects to the camera server – and issues an HTTP request, requesting a specific resource depending on the operation that is to be performed.

Most requests supported by the camera servers – prompt an immediate reply. An exception to this is a get notice command, which leaves the connection open after a request until the camera server – has a new notification that can be sent to the recording engine. A separate image socket is used to receive video sample data from each camera – that is connected to the particular camera server –. Once started, images are sent from the camera server – to the recording engine in a stream which can only be terminated by closing the image socket.

The camera server – maintains a set of connection identifiers which are used to represent active client sessions, and which are to be supplied together with certain requests.

The control and notification sockets share a single connection identifier, while the image socket does not use an explicitly created one. Socket communication is performed asynchronously. That is, the recording engine notifies the operating system that the recording engine is attempting to perform an operation to connect or write to a socket, and the operating system subsequently informs the recording engine when the connection is complete, or when the operating system is ready to accept data to write.
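
This asynchronous pattern can be sketched with Python's selectors module standing in for the operating system's notification mechanism; the callback signature is an assumption.

    import selectors
    import socket

    sel = selectors.DefaultSelector()

    def start_connect(address, on_event):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setblocking(False)
        sock.connect_ex(address)   # returns immediately; completion is signalled later
        sel.register(sock, selectors.EVENT_READ | selectors.EVENT_WRITE, on_event)
        return sock

    def event_loop():
        while True:
            for key, mask in sel.select():
                # Write readiness doubles as the connect-complete notification;
                # read readiness covers incoming data and remote disconnection.
                key.data(key.fileobj, mask)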

Additionally, the recording engine registers itself to receive from the operating system read notifications whenever new incoming data is available on the socket, and disconnection notifications, if the socket is disconnected by the other end, or a timeout occurs. The process is executed whenever a camera server – is initialized in the recording engine , either when the program is first started, or when a new camera server – is registered using the administration interface.

The process begins by creating and connecting sockets for control and notification. The process then iterates through each camera – referenced by the camera server – , and creates and connects an image socket for each one. All steps that connect to a socket do so by initiating an asynchronous connection attempt.

The operating system will notify the recording engine when each connection completes. The process begins at step where the processor creates a control socket. At the next step the processor connects the control socket between the recording engine and each of the camera servers – associated with the recording engine of the associated storage server. Then at the next step , the processor creates the notice socket. At the next step , if the processor determines that there are any camera servers – that are not connected to the recording engine , then the process proceeds to step . At step , the processor selects a next unconnected camera server –. Then at the next step , the processor creates an image socket for the selected camera server –. At the next step , the processor connects the image socket for the selected camera server –. The process is preferably implemented as software resident on the hard disk drive and controlled in its execution by the processor of the storage server. The process is executed by the operating system when a connection or disconnection occurs on a socket, when the socket is ready to accept data for writing to the socket, or when incoming data is ready to be read from the socket.

The process begins at the first step where the processor matches a particular socket with a camera server –. At the next step , if the socket is an image socket, then the process proceeds to step . At step the processor matches the socket with a particular camera –. Then at the next step , if the processor detects a connection event, then the socket is ready to begin processing new commands, and the process proceeds to step . At step , the processor transmits a next command.

At step , if the event detected by the processor is a read event, then the process proceeds to step . At step , the processor processes the socket read event. At step , if the event is a write event, then the process proceeds to step . At step , the processor transmits content from an outgoing buffer configured within the memory. At step , if the event is a disconnection event, then the process proceeds to step . At step , the processor processes the socket disconnection event.

The process is preferably implemented as software resident on the hard disk drive and being controlled in its execution by the processor. The process is responsible for ensuring that any commands that need to be sent on a particular socket are sent.

Each time the process is executed, one command is queued for transmission on the socket. Thus, the response to each command executes the process again to ensure that further commands are sent if necessary. The process executes a process for creating control commands, a process for creating a notification command or a process for creating an image command, depending on which socket the process is acting on.

If a command is successfully created, then the command is sent as follows. If the socket is connected, the command is queued for transmission by asking the operating system to execute a process for processing a socket event when the socket is made ready for writing. If the socket is not connected, a request is issued to the operating system to connect the socket.

Once the socket is connected, the operating system will call a process for processing a socket event, which will eventually execute a process of sending a next command.

If a command was not created, this means that nothing needs to be executed using the current socket at this point in time, and the socket is disconnected.

The process begins at the first step , where if the processor determines that the socket is a control socket, then the process proceeds to step . At step , the processor creates a control command and the process proceeds to step . At step , if the processor determines that the socket is a notification socket, then the processor proceeds to step . At step , the processor creates a notification command and the process proceeds to step . At step , if the processor determines that the socket is an image socket, then the process proceeds to step . At step , the processor creates an image command.

At step , if the processor determines that a command was created, then the process proceeds to step . At step , if the processor determines that the socket is open, then the process proceeds to step . At step , the processor queues the created command in memory for transmission. At step , the processor connects the created socket and the process concludes. The process is executed when there is data ready to be read from a socket. The process determines the type of socket, and executes a function dedicated to that type of socket.

The process begins at the first step where, if the socket is a control socket, then the process proceeds to step . At step , the processor receives a control response. At step , if the socket is a notification socket, then the process proceeds to step . At step , the processor receives the notification response. At step , if the socket is an image socket, then the process proceeds to step . At step , the processor receives the image response and the process concludes.

The process is executed when the operating system informs the recording engine that a socket has been disconnected. If an outgoing data buffer configured within memory contains data, then there is a command that needs to be transmitted on the socket. In this instance, the processor informs the operating system that a connection is to be established. The operating system will later inform the recording engine of a successful connection.

If no data is present in the outgoing data buffer, the processor does not attempt to reconnect the socket. The socket can subsequently be connected when a new command needs to be sent. The process begins at the first step where, if an outgoing buffer configured within memory is in use, then the process proceeds to step . At step , the processor connects the socket and the process concludes. The process begins at the first step where, if a connection identifier is open for a recording engine session with a particular camera server – , then the process proceeds to step . Otherwise, the process proceeds to step where the processor creates an open camera server message and the process concludes.

At step , if the processor determines that camera server information needs to be retrieved, then the process proceeds to the next step . At step , the processor creates a get camera server information message and the process concludes. In connection with step , the recording engine needs to be aware of various aspects of camera server configuration, such as the names of attached cameras – and current pan, tilt and zoom positions. As a result, at step , the processor generates an HTTP message that requests such information.

In some implementations, various requests may need to be executed separately to obtain all of the required information. These various requests can be combined into a single step for purposes of simplicity. At step , if the pan, tilt and zoom settings for any camera – need adjustment, then the process proceeds to step . At step , if the processor determines that a preset list of a particular camera – is up-to-date, then the process proceeds to step . At step , if camera control corresponding to the particular camera – is open, then the process proceeds to step . At step , if the processor determines that the control priority of a particular camera – is set to a normal priority level, then the process proceeds to step . At step , the processor creates an operate camera message and the process concludes.

At step , the processor creates a priority message and the process concludes. At step , if camera control is open, then the process proceeds to step . At step , the processor creates a release camera control message and the process concludes.

At step , the processor creates a get preset list message and the process concludes. At step , if the processor determines that the control priority of a particular camera – is set to a below normal priority level, then the process proceeds to step . Otherwise, the process proceeds to step as described above.

Camera servers – typically allow storage servers to take control of a camera. However, this may lead to two or more storage servers repeatedly taking control from one another. For example, if the storage servers A and B are set to record from different preset camera positions for the camera , the storage servers A and B may keep requesting control of the particular camera server to change the position of the camera . To avoid this, before gaining control, the processor of the storage server A may set the priority level of the connection to the particular camera server to a below normal level of six (6).

As a result, the storage server A will not take the control right from the storage server B or the other storage servers , which run at a normal priority level of seven (7) , without control being granted to the storage server A. Once control has been granted to the storage server A, the processor of the storage server A restores the priority level of the storage server A to a normal level of seven (7) in order to prevent other storage servers from obtaining the control right.
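
The priority dance condenses into the short sketch below. The connection class and its methods are hypothetical stand-ins for the HTTP requests actually exchanged with the camera server; the levels six and seven come from the description above.

    NORMAL_PRIORITY = 7
    BELOW_NORMAL_PRIORITY = 6

    class CameraServerConnection:
        """Minimal stand-in; a real connection would issue HTTP requests."""
        def __init__(self):
            self.priority = NORMAL_PRIORITY
            self.has_control = False
        def set_priority(self, level):
            self.priority = level
        def request_camera_control(self):
            self.has_control = True      # granted immediately in this stub
        def wait_until_control_granted(self):
            pass

    def acquire_camera_control(conn):
        conn.set_priority(BELOW_NORMAL_PRIORITY)  # cannot pre-empt a peer at 7
        conn.request_camera_control()
        conn.wait_until_control_granted()
        conn.set_priority(NORMAL_PRIORITY)        # peers at 7 now cannot pre-empt us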

At step , if the processor determines that a camera control request has been made, then the process proceeds to step . At step , the processor creates a get camera control message and the process concludes. The process is executed when data is ready to be read on the control socket.

The process can be executed more than once for a single response, if not all the response data is available at the same time. The HTTP header is parsed to obtain the content length of the response, and, if any additional data is available, that data is received into a received data buffer. If not all the response data has been received for the given socket, the process will end.

Otherwise, the process for controlling a response is executed. The process begins at the first step , where if the processor determines that a complete HTTP header has been received, then the process proceeds to step . Otherwise, the processor proceeds to step where HTTP header data is received.

At the next step the processor sets a remaining length attribute to the content length of the control response. Then at the next step , if the processor determines that additional data is available, then the process proceeds to step . At step , the processor receives remaining data into a socket buffer configured within memory. Then at the next step , the processor subtracts the length of the received part of the response from the remaining length of the response.
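
The remaining-length bookkeeping can be sketched as follows; the state layout and buffer handling are simplified assumptions, while the header-then-countdown behaviour follows the description above.

    def new_state():
        return {"buffer": b"", "remaining": None}

    def feed(state, chunk):
        """Feed received bytes; return the complete body once it has all arrived."""
        state["buffer"] += chunk
        if state["remaining"] is None:                    # still reading the header
            end = state["buffer"].find(b"\r\n\r\n")
            if end < 0:
                return None                               # header incomplete
            header = state["buffer"][:end].decode("latin-1").lower()
            length = 0
            for line in header.split("\r\n"):
                if line.startswith("content-length:"):
                    length = int(line.split(":", 1)[1])
            state["buffer"] = state["buffer"][end + 4:]   # keep only body bytes
            state["remaining"] = length - len(state["buffer"])
        else:
            state["remaining"] -= len(chunk)
        if state["remaining"] <= 0:
            return state["buffer"]                        # complete response body
        return None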

At the next step , if the process determines that the entire response has been received, then the process proceeds to step . At step , the processor processes the control response and the process concludes. The process processes the data received in response to a command sent on a control socket and performs certain actions based on the request to which the received data is a response.

Certain requests do not require additional handling. The process begins at the first step , where if the processor detects an open camera server request, then the process proceeds to step At step , the processor extracts a connection identifier from the response body of the request.

Then at the next step , the processor sends a next notification command and the process proceeds to step The format of the media file is flexible. Each individual sample e. The following sections describe a preferred format for the media file , which is supported by the described arrangements.

However, a person skilled in the relevant art would appreciate that any suitable file format can be used for the media file Thus, changes in frame rate or late samples due to network lag can result in the playback rate inaccurately portraying the passage of time.

When the media file is first created, a set of headers is generated. Further, individual stream headers are generated to represent each track within the associated index file that points to the given media data file An empty sample list structure is also created to store each individual sample e. When the media file is closed, the headers are updated to reflect the number of samples that were stored within the media file When inserting a sample e. The stored sample has a header indicating which data stream the corresponding sample belongs to and the size of the sample.

The CNVD chunk contains additional information, which can be used to recover the media file and the associated index file In particular, the CNVD chunk contains information including the timestamp i. Each video sample of the sample data captured by the camera servers – is preferably configured as a separate JPEG file, which is inserted as a separate sample i.

The first copy of the text string is a null-terminated version of the text string. The second copy of the text string is placed immediately after the null character terminating the first copy. As an example, FIG. The event files corresponding to each camera server – are stored in a subdirectory of a drive configured within the storage device and selected to store event files for that camera server – The recording engine creates an event file corresponding to each video file pair i.

The matching of an event file with video file pair is useful in situations where video and event data is moved to archival storage, as the video and event data can be both copied or moved together and later can both be replaced together. Such matching ensures that the video data associated with specific events will be available if a user attempts to access information about the events, and that event data associated with specific video will be available if a user tries to access specific video.

Each event file can be named according to the following template:. The cameraid section of the template 3 is a character string that represents the unique identifier of a particular camera server – given to the particular camera – by the recording engine when the particular camera server – was first added to the system The value of the cameraid of the template section 3 is represented in hexadecimal format, with lower case letters representing the values A to F.

The creationdate section of the template 3 represents the date at which the event file was created, in UTC. The format of the creationdate is yyyymmdd. The creationtime section of the template 3 represents the time at which the event file was created, in UTC. The count section of the template 3 represents a sequential number corresponding to the event file, counting from an event file representing the beginning of a current recording session by the recording engine If more than one file is created with the same camera identifier, creation date and time i.

In addition to these event files, there is a system event file that contains events associated with the storage server but not necessarily associated with a particular camera server – These include events that indicate the startup or shutdown of the recording engine , or remote users logging in to the system on a viewer to retrieve viewer configuration and layout files, as will be explained in more detail below.

The event file is a text file, encoded in UTF8 format. Each line of the event file contains an event record describing a single event, and consists of several fields, delimited by a colon, as follows:. The datetime field of the event file line 4 describes the date and time at which the event was recorded. The datetime field is in the standard format that is generated by the ANSI function ctime , with an additional millisecond field inserted.

An example of such a time is: Tue Aug 19 The cameraid field describes the identifier of the camera sever – associated with the event. The type field of the event file line 4 comprises a four letter code defining the type of the event, followed by either a zero i. Certain events, such as a user logging on, do not have an OFF event.

Table 6 below defines all the available event types:. The priority field of the event file line 4 above contains a value between one i. The final field of the event file line 4 , description, may optionally contain additional colons as the end of the field is located using the newline character. The configuration management module reads and writes the storage server configuration file, and provides a mechanism to allow other applications to signal the recording engine when they modify the storage server configuration file.

The process is preferably implemented as software resident on the hard disk drive and controlled in its execution by the processor The process waits on a synchronization object that can be signaled by other applications. In this instance, the synchronisation object used is a named Event object. The process updates the General section of the configuration file. If camera servers – or cameras – were also added, modified or deleted, these changes are ignored.

The process begins at the first step where the processor creates a synchronisation object that can be accessed by other applications, setting the name and security attributes of the synchronisation object. The processor then enters a loop, waiting for the synchronisation object to be triggered by another application.

After the object is triggered at step , the Storage Server Configuration File is read at the next step At the next step , the process updates the internal variables that correspond to a General section of the Storage Server Configuration File, before returning to step where a new synchronisation event is sought. The video file management module of the recording engine manages all of the video files i. The video file management module is responsible for creating new video files as required, writing video sample data to the video files, and swapping to a standby file when the size or duration of an existing video file exceeds a set limit.

The video file management module comprises a video file management thread or subroutine. The video file management thread is responsible for creating and closing video files. The creation and closing operations can be relatively time consuming, and are thus performed by a separate thread i.

As described above, the recording engine attempts to ensure that an inactive standby file i. As soon as a first sample is written to a standby file, the file management thread is instructed to create a new inactive standby file, ready for the next swapover. A process for creating and closing video files in accordance with the video file management thread is described below with reference to FIG.

The process is preferably implemented as software resident on the hard disk drive of the storage server and being executed by the processor The process begins at step , where the processor waits until a message from another thread is detected.

The message can be one of three types as will be described below. Otherwise, the process proceeds to step At step , the processor enters a create video file set process , in order to create a media file and associated index file The process will be described in detail below in section 3. The process then returns to step Otherwise the process proceeds to step At step , the process enters a close video file set process , in order to close the media file and index file to create a completed file pair.

The process will be described in section 3. After handling a message, the process returns to the waiting step.

The create video file set process will now be described with reference to the flow diagram of FIG. The process is called by the video file management thread. The purpose of the process is to create a set of standby files, each consisting of one media file and an associated index file. The process is preferably implemented as software resident in the hard disk drive and controlled in its execution by the processor.

The process begins with the processor acquiring a synchronisation object that is shared with the access engine. The synchronisation object is stored on the hard disk drive and is used to ensure that creation, opening, closure and deletion of media and index files are performed atomically with respect to each of the recording engine and the access engine. In the next step, the processor generates a set of suitable standby file names, according to the specification in section 3.

At the next step, the processor increments the file counter for the given camera in preparation for the next standby file that is to be generated. Then the processor creates the event file in the hard disk drive, in accordance with a create event file process which will be described below with reference to FIG. At the next step, the processor creates the media file in the hard disk drive, in accordance with a process for creating a media file that will be explained in detail below with reference to FIG.

The media file is created according to a media file path parameter and a media file format parameter, which are supplied by the processor. The media file format parameter specifies the format of the media file to be created. Then, at the next step, the processor creates the index file in the hard disk drive. The index file is generated based on an index file path parameter, which is supplied by the processor, and is created in accordance with a process for creating an index file that will be described in detail below with reference to FIG.

The process continues at the next step, where the processor creates a first track (i.e., a video track) configured to reference the video samples to be stored in the media file. Also at this step, the processor adds the video track to the index file configured within the hard disk drive. Then, at the next step, the processor creates a second track (i.e., a text track). The second track is configured to reference text strings. Also at this step, the processor adds the created text track to the index file. The process concludes at the next step, where the synchronisation object obtained earlier is released, thereby allowing the access engine to operate on these files.
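A compact sketch of this sequence is given below. The shared lock stands in for the synchronisation object shared with the access engine, and the file-naming scheme and counter dictionary are purely illustrative.

```python
import threading

FILE_LOCK = threading.Lock()
file_counters: dict = {}

def create_video_file_set(camera_id: str) -> tuple:
    with FILE_LOCK:                              # atomic w.r.t. the access engine
        n = file_counters.get(camera_id, 0)
        file_counters[camera_id] = n + 1         # advance for the next standby set
        media_path = f"{camera_id}_{n:06d}.media"    # hypothetical naming
        index_path = f"{camera_id}_{n:06d}.index"
        event_path = f"{camera_id}_{n:06d}.event"
        for path in (event_path, media_path, index_path):
            open(path, "wb").close()             # create empty standby files
        # A video track and a text track would be added to the index here.
        return media_path, index_path, event_path

print(create_video_file_set("cam01"))
```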

The create media file process will now be explained with reference to the flow diagram of FIG. The process begins with the processor creating the media file, in the hard disk drive, at the location specified by the media file path parameter supplied by the processor. If the media file cannot be created, the process concludes. At the next step, the processor creates an empty set of headers and an empty sample structure to store the samples detected by the processor. Then the created sample structures are written to the media file configured within the hard disk drive. The index file structure can be stored either in the hard disk drive or in a temporary file configured within memory, since the index file structure will be appended to the end of the media file, the resultant length of which is currently unknown.

The process of creating an index file will now be explained with reference to the flow diagram of FIG. The process begins with the processor creating the index file, in the hard disk drive, at the location specified by the index file path parameter supplied by the processor. Then, at the next step, the processor initialises a movie header atom for the index file. The process concludes at the next step, where the processor initialises a user data atom for the index file.
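The movie header ('mvhd') and user data ('udta') atoms mentioned above follow the general QuickTime atom layout: a 32-bit big-endian length (including the 8-byte header) followed by a four-character type code and its payload. The sketch below writes that skeleton; the payload contents are placeholders, not the patent's exact index contents, and a real index would nest these atoms inside a movie atom.

```python
import struct

def atom(atom_type: bytes, payload: bytes) -> bytes:
    # 4-byte big-endian size + 4-character code + payload
    return struct.pack(">I4s", 8 + len(payload), atom_type) + payload

with open("example.index", "wb") as f:
    f.write(atom(b"mvhd", bytes(100)))   # placeholder movie header payload
    f.write(atom(b"udta", b""))          # empty user data atom
```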

The close video file pair process is called by the video file management thread. The purpose of the process is to take an active open pair of files, close the files and rename them to the appropriate file names. The process begins with the processor acquiring the synchronisation object that is shared with the access engine. As described above, the synchronisation object is used to ensure that creation, opening, closure and deletion of the media file and index file forming the file pair are performed atomically with respect to each of the recording engine and the access engine. At the next step, the processor determines whether the active file pair actually contains samples.

If no samples are present, the file pair stores no useful data and the process proceeds to the discarding steps described below. Otherwise, if samples are present within the standby file pair, a set of new file names is generated in accordance with the specification described above in section 3.

At the next step, the media file of the file pair is closed in accordance with a complete media file process, which will be described in detail below with reference to FIG. Then, at the next step, the media file is renamed.

The process continues at the next step, where the media file reference in the associated index file is updated to refer to the renamed media file. At the next step, the active index file is closed in accordance with a complete index file process, which will be described in detail below with reference to FIG.

Then, at the next step, the index file is renamed.

In the discarding branch, the media file is closed in accordance with the complete media file process. Since the media file contains no useful data, the media file is then deleted from the hard disk drive. The process continues with the index file of the file pair being closed in accordance with the complete index file process; since the index file contains no useful data, the index file is likewise deleted from the hard disk drive. The process then closes the event file belonging to the current file set by calling the close event file process. The process concludes at the final step, where the synchronisation object obtained earlier is released.
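The two branches of the pair-closing logic can be sketched as follows; the shared lock again stands in for the synchronisation object, and the temporary-name convention and sample count argument are assumptions of the sketch.

```python
import os
import threading

FILE_LOCK = threading.Lock()

def close_video_file_pair(media_path: str, index_path: str,
                          sample_count: int) -> None:
    with FILE_LOCK:
        if sample_count > 0:
            # Complete and rename the files to their final names; the index
            # would be updated here to reference the renamed media file.
            os.rename(media_path, media_path.replace(".tmp", ""))
            os.rename(index_path, index_path.replace(".tmp", ""))
        else:
            # Nothing useful was recorded: discard both files.
            os.remove(media_path)
            os.remove(index_path)
```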

The complete media file process, as executed in the steps above, will now be described with reference to the flow diagram of FIG. The process performs any cleanup tasks required to ensure that the media file is not corrupt and is playable, and then closes the media file. If there is unwritten media file data in memory, that data is first flushed to the media file configured within the hard disk drive. The headers are then updated to reflect the exact number of samples stored in the media file. The process concludes at the next step, where the media file is closed to create a completed media file.

The complete index file process, as executed in the steps above, will now be described with reference to the flow diagram of FIG.

The process begins at the first step, where the processor sets the end time of the index file to an index end time value supplied by the processor. The end time of the index file is set to cover the period from the first sample of the media file to the current time represented by a system clock of the storage server. At the next step, the processor selects a first track of the index file. Then, while there are remaining tracks of the index file to be processed, the end time for the selected track is set to the index end time supplied by the processor, and the next track of the index file is selected. Once all tracks have been processed, the processor flushes any previously unwritten index file data in memory to the index file configured in the hard disk drive. The process then concludes at the next step, where the index file is closed.
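A minimal sketch of this completion step follows; the dictionary-based index structure is a stand-in for the atom-based index file described above.

```python
import time

def complete_index(index: dict) -> None:
    index["end_time"] = time.time()              # cover up to "now"
    for track in index.get("tracks", []):
        track["end_time"] = index["end_time"]    # propagate to every track
    # flushing and closing the index file would happen here

idx = {"tracks": [{"name": "video"}, {"name": "text"}]}
complete_index(idx)
print(idx)
```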

The process takes a sample representing a single video data sample from a frame buffer configured within memory, and ensures that the sample is written to the media file. The associated index file is then updated to reference the newly written sample. The process begins at the first step, where if the current rate of saving sample data is the same as the rate at which the storage server is acquiring sample data, the process proceeds directly to writing the sample. Otherwise, the process proceeds to a step where the capture time of the sample is used to update a save rate counter.

That is, the save rate counter is incremented by the difference between the time of the current sample (i.e., its capture time) and the time of the previous sample. At the next step, if the value of the save rate counter exceeds the current save rate, the process proceeds; otherwise the process concludes. When the process proceeds, the processor resets the save rate counter by subtracting the current save rate from the value of the save rate counter. In the next step, the processor determines whether there is enough disk space to write the current frame to the disk associated with the camera. If there is not enough room, the process concludes.
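This counter arithmetic is a classic rate-decimation accumulator, sketched below. Treating the "save rate" as a seconds-per-saved-frame interval is an assumption of the sketch; the key point is that the interval is subtracted rather than the counter being zeroed, so timing error does not accumulate.

```python
class SaveRateGate:
    def __init__(self, save_interval: float):
        self.save_interval = save_interval   # seconds of capture time per saved frame
        self.counter = 0.0
        self.prev_time = None

    def should_save(self, capture_time: float) -> bool:
        if self.prev_time is not None:
            # Accumulate the gap between successive capture times.
            self.counter += capture_time - self.prev_time
        self.prev_time = capture_time
        if self.counter >= self.save_interval:
            self.counter -= self.save_interval   # subtract, don't zero
            return True
        return False

gate = SaveRateGate(0.5)   # save one frame per 0.5 s of capture time
print([gate.should_save(t / 10) for t in range(10)])   # 10 fps input
```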

Then at step the processor ensures that there is space to store the sample to be written to the media file in accordance with a check video file limits process which will be described in section 3.

At the next step, if there are no samples yet in the media file, the processor sends a request to the video file management thread, executing on the storage server, to create a new inactive standby file (i.e., a new standby file pair); otherwise, this request is skipped.

The sample is then written to a video track of the media file, in accordance with a write frame to video track process which will be explained in detail below with reference to FIG.

The process concludes at the next step, where the properties of the sample are written, in the form of a text string, to a text track of the media file in accordance with a write frame properties to text track process.

The process of checking video file limits will now be described with reference to FIG. The process is preferably implemented as software resident on the hard disk drive and controlled in its execution by the processor. The process determines whether an active file pair for a camera server (i.e., the media file and index file currently being written for that camera server) can accommodate further samples.

If no more samples can be accommodated, the processor switches to an inactive standby file for that camera server, and issues a request to close the previous active media file pair. The process begins at the first step, where the processor checks the index file corresponding to the media file of the active file pair. If the index file has no free entries in the corresponding sample table structure to accommodate a new sample, a swapover is initiated. Next, if the capacity of the currently active media file has reached a predetermined limit, a swapover is likewise initiated.

If the processor determines that the system is configured by a user to limit media file sizes, and the currently active media file is not smaller than the user-specified limit, a swapover is initiated. Similarly, if the processor determines that the system is configured by the user to limit media file duration, and the duration of the currently active media file is not smaller than the user-specified limit, a swapover is initiated; otherwise the process concludes.

To perform a swapover, the processor replaces the currently active media file with the inactive standby media file.

Then, at the next step, the processor sends a CLOSE FILE request to the video file management thread in order to close the media file, and the process concludes.
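The limit checks reduce to a simple decision function, sketched below with illustrative field names; a true result triggers the swapover and CLOSE FILE request described above.

```python
from typing import Optional

def needs_swapover(index_free_entries: int,
                   media_size: int, size_limit: Optional[int],
                   duration: float, duration_limit: Optional[float]) -> bool:
    if index_free_entries <= 0:
        return True        # sample table cannot take another entry
    if size_limit is not None and media_size >= size_limit:
        return True        # user-configured size limit reached
    if duration_limit is not None and duration >= duration_limit:
        return True        # user-configured duration limit reached
    return False

# e.g. swap when a 100 MB limit is hit:
print(needs_swapover(10, 101 * 2**20, 100 * 2**20, 30.0, None))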

The process of writing a sample to a video track will now be explained with reference to FIG. At the first step, if there are already samples present in the video track of the active media file, and the format and resolution of the new sample are not the same as those of the current samples in the media file, the processor reconfigures the video track of the media file to accept a sample using the correct data format and resolution. Otherwise, the process proceeds directly to adding the sample.

The process concludes at the step where the sample is added to the video track in accordance with a process for adding a sample to a track, which will be described in detail below with reference to FIG.

The process for writing sample properties to a text track will now be described with reference to FIG.

The process ensures that time and event information is written to the text track of a media file configured within the hard disk drive of the storage server. The process begins at the first step, where the processor accesses a timestamp indicating the acquisition time associated with the sample to be written, and generates a string of the format produced by the ANSI function ctime, with an additional millisecond field inserted. At the next step, if there are any events for the associated camera server that have not been saved to the current active media file, the processor pulls each such event off an event queue configured in memory, in turn, and the description of the event is appended to the generated string. The process concludes with the text string being added to the text track in accordance with the add sample to track process.

The process for adding a sample to a track will now be explained with reference to FIG.

The process takes a video sample or a text sample (i.e., a video frame or a text string) and writes it to the media file. The process begins with the sample being added to the active media file, in accordance with an add sample to media file process which will be described in detail below with reference to FIG. At the next step, the sample offset and sample size of the sample just added are written to the sample table structure of the associated track in the index file.

At the next step, if the sample is the first sample to be associated with the track, the processor sets the starting time of the track to the timestamp of the current sample using the sample recording time parameter. Otherwise, the processor determines the duration of the previous sample by subtracting the timestamp of the previous sample from the timestamp of the current sample.

Then, at the next step, a track duration value associated with the relevant track is updated. At the next step, if the track duration causes the end of the track to exceed the index duration, the index duration is updated to accommodate the track; otherwise, the process concludes.

The process of adding a sample to a media file will now be explained with reference to FIG. If the sample to be added to the media file is empty (i.e., contains no data), no data needs to be appended. If the track being written is a text track, the original data of the detected sample is appended as a data string to a null-terminated copy of the string. Otherwise, the data of the detected sample is appended directly to the media file configured within the hard disk drive.

Then, at the next step, the processor creates a custom data chunk containing a timestamp based on the detected sample acquisition time, and writes the custom data chunk to the media file configured within the hard disk drive. At the final step, the processor writes a data chunk based on the detected sample to the media file.
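The text-sample layout just described can be sketched as follows. The null-terminated string followed by a small timestamp chunk matches the description above, but the 'tstp' tag and the double-precision encoding are assumptions of this sketch, not the patent's exact byte layout.

```python
import struct
import time

def write_text_sample(f, text: str, acquisition_time: float) -> None:
    f.write(text.encode("utf-8") + b"\x00")          # null-terminated string
    # Custom timestamp chunk: 4-byte tag + 8-byte big-endian float seconds.
    f.write(b"tstp" + struct.pack(">d", acquisition_time))

with open("text_track.bin", "wb") as f:
    write_text_sample(f, time.ctime() + " .123", time.time())
```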

As described above, the recording engine includes an event management module for managing all of the event files stored on the drives configured within the storage device of the storage server. The event management module is responsible for creating new event files as required, writing data to these new event files, and swapping to further new event files when the size or duration of an existing event file exceeds a predetermined limit.

The process creates a standby media file. The process begins at the first step, where the processor uses functionality provided by the operating system to create a file using the path generated in the create file pair process.

The generate event process begins at the first step, where an event record is created according to the format described above. At the next step, the event record is written to a currently open event file configured within the storage device, or to a currently open system event file.

The close event file process begins at the first step, where the processor generates a file set stopping event using the generate event process. The process concludes at the next step, where the event file is closed.

As described above, the camera server communications module manages communications between the recording engine and the camera servers that the recording engine controls. The communications module is responsible for ensuring that correct settings are sent to a particular camera server, and receiving samples and event notifications as required.

As such, the camera server communications module is responsible for maintaining connections to the camera servers, and for the exchange of all data between the recording engine and each of the camera servers. The recording engine connects to a camera server and issues an HTTP request, requesting a specific resource depending on the operation that is to be performed. Most requests supported by the camera servers prompt an immediate reply. An exception to this is a get notice command, which leaves the connection open after a request until the camera server has a new notification that can be sent to the recording engine. A separate image socket is used to receive video sample data from each camera that is connected to the particular camera server. Once started, images are sent from the camera server to the recording engine in a stream which can only be terminated by closing the image socket.

The camera server maintains a set of connection identifiers which are used to represent active client sessions, and which are to be supplied together with certain requests. The control and notification sockets share a single connection identifier, while the image socket does not use an explicitly created one. Socket communication is performed asynchronously. That is, the recording engine notifies the operating system that the recording engine is attempting to perform an operation to connect or write to a socket, and the operating system subsequently informs the recording engine when the connection is complete, or when the operating system is ready to accept data to write.

Additionally, the recording engine registers itself to receive from the operating system read notifications, whenever new incoming data is available on the socket, and disconnection notifications, if the socket is disconnected by the other end or a timeout occurs.

The camera server initialisation process is executed whenever a camera server is initialised in the recording engine, either when the program is first started, or when a new camera server is registered using the administration interface.

The process begins by creating and connecting sockets for control and notification. The process then iterates through each camera referenced by the camera server, and creates and connects an image socket for each one. All steps that connect to a socket do so by initiating an asynchronous connection attempt; the operating system will notify the recording engine when each connection attempt completes. The process begins with the processor creating a control socket.

At the next step, the processor connects the control socket between the recording engine and each of the camera servers associated with the recording engine of the associated storage server. Then, at the next step, the processor creates the notification socket.

At the next step, if the processor determines that there are any camera servers that are not connected to the recording engine, the processor selects a next unconnected camera server, creates an image socket for the selected camera server, and connects that image socket.

The socket event handling process is preferably implemented as software resident on the hard disk drive and controlled in its execution by the processor of the storage server. The process is executed by the operating system when a connection or disconnection occurs on a socket, when the socket is ready to accept data for writing to the socket, or when incoming data is ready to be read from the socket.

The process begins at the first step, where the processor matches the particular socket with a camera server. At the next step, if the socket is an image socket, the processor further matches the socket with a particular camera. Then, if the processor detects a connection event, the socket is ready to begin processing new commands, and the processor transmits a next command. If the event detected by the processor is a read event, the processor processes the socket read event.

If the event is a write event, the processor transmits content from an outgoing buffer configured within the memory. If the event is a disconnection event, the processor processes the socket disconnection event.
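In a modern Python rendering, this event-driven dispatch can be sketched with the standard selectors module standing in for the operating-system callbacks; the socketpair and the "control" tag are stand-ins for a real camera-server connection.

```python
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()                        # stand-in for a camera-server link
sel.register(b, selectors.EVENT_READ, data="control")

a.sendall(b"hello")
for key, _ in sel.select(timeout=1):
    payload = key.fileobj.recv(4096)
    if not payload:                               # disconnection event
        sel.unregister(key.fileobj)
        key.fileobj.close()
    elif key.data == "control":                   # dispatch on socket type
        print("control socket read:", payload)
```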

The send next command process is preferably implemented as software resident on the hard disk drive and controlled in its execution by the processor. The process is responsible for ensuring that any commands that need to be sent on a particular socket are sent. Each time the process is executed, one command is queued for transmission on the socket.

Thus, the response to each command causes the process to execute again, to ensure that further commands are sent if necessary. The process executes a process for creating a control command, a process for creating a notification command, or a process for creating an image command, depending on which socket the process is acting on.

If a command is successfully created (i.e., there is a command to be sent), it is handled as follows. If the socket is connected, the command is queued for transmission by asking the operating system to execute the socket event handling process when the socket is made ready for writing. If the socket is not connected, a request is issued to the operating system to connect the socket. Once the socket is connected, the operating system will call the socket event handling process, which will eventually execute the send next command process again.

If a command was not created, this means that nothing needs to be executed using the current socket at this point in time, and the socket is disconnected. The process begins at the first step, where if the processor determines that the socket is a control socket, the processor creates a control command. If the processor determines that the socket is a notification socket, the processor creates a notification command. If the processor determines that the socket is an image socket, the processor creates an image command.

Next, if the processor determines that a command was created and the socket is open, the processor queues the created command in memory for transmission.

If the socket is not open, the processor connects the socket, and the process concludes. The socket read process is executed when there is data ready to be read from a socket. The process determines the type of socket, and executes a function dedicated to that type of socket. The process begins at the first step, where if the socket is a control socket, the processor receives a control response. If the socket is a notification socket, the processor receives the notification response.

If the socket is an image socket, the processor receives the image response, and the process concludes. The socket disconnection process is executed when the operating system informs the recording engine that a socket has been disconnected. If an outgoing data buffer configured within memory contains data, then there is a command that needs to be transmitted on the socket. In this instance, the processor informs the operating system that a connection is to be established.

The operating system will later inform the recording engine of a successful connection. If no data is present in the outgoing data buffer, the processor does not attempt to reconnect the socket. The socket can subsequently be connected when a new command needs to be sent. The process begins at the first step, where if an outgoing buffer configured within memory is in use, the processor connects the socket and the process concludes.

The create control command process begins at the first step, where if a connection identifier is open for a recording engine session with a particular camera server, the process proceeds to the checks described below. Otherwise, the processor creates an open camera server message and the process concludes. At the next step, if the processor determines that camera server information needs to be retrieved, the processor creates a get camera server information message and the process concludes.

In connection with this step, the recording engine needs to be aware of various aspects of camera server configuration, such as the names of attached cameras and the current pan, tilt and zoom positions.

As a result, the processor generates an HTTP message that requests such information. In some implementations, various requests may need to be executed separately to obtain all of the required information.

These various requests can be combined into a single step for purposes of simplicity. Next, if the pan, tilt and zoom settings for any camera need adjustment, the process proceeds through a series of checks: if the preset list of the particular camera is up-to-date, camera control corresponding to the particular camera is open, and the control priority of the particular camera is set to a normal priority level, the processor creates an operate camera message and the process concludes.

Depending on the outcome of these checks, the processor may instead create a priority message, a release camera control message (where camera control is open but no adjustment is required), or a get preset list message, in each case concluding the process. If the processor determines that the control priority of the particular camera is set to a below normal priority level, the process proceeds as described above.

Camera servers typically allow storage servers to take control of their cameras. However, this may lead to two or more storage servers contending for control of the same camera. For example, if the storage servers A and B are set to record from different preset camera positions for the camera, the storage servers A and B may keep requesting control of the particular camera server to change the position of the camera. To avoid this, before gaining control, the processor of the storage server A may set the priority level of the connection to the particular camera server to a below normal level of six (6).

As a result, the storage server A will not take the control right from the storage server B or the other storage servers, which run at a normal priority level of seven (7), but instead waits for control to be granted. Once control has been granted to the storage server A, the processor of the storage server A restores the priority level of the storage server A to a normal level of seven (7) in order to prevent other storage servers from obtaining the control right.
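The ordering of this handshake is the essential point, and is sketched below; the camera-server calls are stubs, and only the priority values six (6) and seven (7) come from the description above.

```python
BELOW_NORMAL, NORMAL = 6, 7

def acquire_camera_control(server) -> None:
    server.set_priority(BELOW_NORMAL)   # cannot steal control from peers
    server.request_control()
    server.wait_until_granted()         # may loop on "denied" notifications
    server.set_priority(NORMAL)         # now hold control against peers

class FakeServer:
    def set_priority(self, p): print("priority ->", p)
    def request_control(self): print("control requested")
    def wait_until_granted(self): print("control granted")

acquire_camera_control(FakeServer())
```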

At the next step, if the processor determines that a camera control request has been made, the processor creates a get camera control message and the process concludes. The receive control response process is executed when data is ready to be read on the control socket.

The process can be executed more than once for a single response, if not all the response data is available at the same time. The HTTP header is parsed to obtain the content length of the response, and, if any additional data is available, that data is received into a received data buffer.

If not all the response data has been received for the given socket, the process will end; otherwise, the process for processing the control response is executed. The process begins at the first step, where if the processor determines that a complete HTTP header has been received, the process proceeds to the body-handling steps below. Otherwise, the processor receives HTTP header data, and at the next step sets a remaining length attribute to the content length of the control response.

Then, at the next step, if the processor determines that additional data is available, the processor receives the remaining data into a socket buffer configured within memory, and subtracts the length of the received part of the response from the remaining length of the response. At the next step, if the process determines that the entire response has been received, the processor processes the control response and the process concludes.
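A simplified parser following this pattern is sketched below: header bytes accumulate until the blank line, Content-Length sets the remaining length, and body bytes are consumed until that length is satisfied. It is an illustration, not the recording engine's implementation.

```python
class ControlResponseReader:
    def __init__(self):
        self.buffer = b""
        self.remaining = None        # unknown until the header is complete

    def feed(self, data: bytes):
        self.buffer += data
        if self.remaining is None and b"\r\n\r\n" in self.buffer:
            header, self.buffer = self.buffer.split(b"\r\n\r\n", 1)
            for line in header.split(b"\r\n"):
                if line.lower().startswith(b"content-length:"):
                    self.remaining = int(line.split(b":", 1)[1])
        if self.remaining is not None and len(self.buffer) >= self.remaining:
            return self.buffer[: self.remaining]   # complete response body
        return None                                # keep waiting for data

r = ControlResponseReader()
print(r.feed(b"HTTP/1.0 200 OK\r\nContent-Length: 5\r\n\r\nhe"))   # None
print(r.feed(b"llo"))                                              # b'hello'
```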

The process processes the data received in response to a command sent on a control socket, and performs certain actions based on the request to which the received data is a response. Certain requests do not require additional handling. The process begins at the first step, where if the processor detects an open camera server request, the processor extracts a connection identifier from the response body. The processor then sends a next notification command. If the processor determines that the request was a get camera server information request, the response body is parsed to extract information about the camera server concerned, including the current pan, tilt and zoom positions, camera names, and so on.

Accordingly, the processor extracts the camera server properties from the response body. If the processor determines that the request was a get preset list request, the response body is parsed to extract information about the preset positions stored in the camera server.

The processor thus extracts the preset settings from the response body. If the processor determines that the request was an operate camera request, the processor sets a pan, tilt, zoom (PTZ) correct flag.

If the processor determines that the current camera pan, tilt and zoom settings are different from the desired settings, the processor sends a next control command. The create notification command process begins at the first step, where the processor determines whether a connection identifier is currently open. If a connection identifier is not currently open, a command is sent on the control socket to establish a new one.

The command is sent by executing the send next command process with the control socket as an argument. No other commands may be sent on the notification socket until a connection identifier is established. As will be described in detail below, if the current sensor status is not known, an external IO status message is created to determine the current sensor status. Otherwise, a get notice message is created, which allows the recording engine to listen for sensor and motion detection events and camera control events.

At the next step, if the processor determines that the status of the current sensor is known, the processor creates a get notice message.

The get notice message allows the recording engine to listen for sensor and motion detection events and camera control events. The processor then sends a next control command and the process concludes.

The receive notification response process begins at the first step, where if the processor determines that a complete HTTP header has been received, the process proceeds to the body-handling steps. Otherwise, the processor receives the HTTP header data.

Then, at the next step, the processor sets a remaining length to the content length. At the next step, if the processor determines that additional data is available, the processor receives the remaining data into a socket buffer configured within memory, and then subtracts the received length from the remaining length of the notification message.

At the next step, if the entire notification response has been received, the processor processes the notification response.

If the request is determined to be an external input/output (IO) status request, the processor determines whether the current sensor and motion detection status reported by the camera matches those known by the recording engine. The camera settings are updated according to a triggered event, by executing a process for updating camera settings, as will be described in detail below.

If the request is determined to be a get notice request, then the processor determines from the response what kind of notification this is. If the notification indicates a change in sensor or motion detection status, a process for generating an event is called, and the camera settings are updated according to the triggered event by calling an update camera settings process.

If the notification indicates that a camera was moved by another client, a process for updating camera settings is executed to initiate moving the camera back to the position desired by the recording engine, if necessary.

If the notification indicates that camera control was granted, the next command is sent on the control socket using a process for sending a next command. This would typically cause an operate camera command to be generated to move the camera to a new position. If the notification indicates that camera control was denied, the next command is sent on the control socket using the process for sending a next command.

This would typically cause a new request for camera control to be issued, resulting in a loop that exits when camera control is finally granted. The process begins at the first step where the processor determines if there is an external IO status request. If there is no request the No option of step then the process proceeds to step If, however, there is an external IO status request the Yes option of step , then step checks whether a sensor or motion state is different.

If there is no difference in states the No option of step then the process proceeds to step , in which the next notification command is sent. Once the event has been generated, in step the camera settings are updated. Then, in step , the next notification command is sent and the process concludes. Step , which is executed when there is no external IO status request, checks whether there is a get notice request.

If there is a get notice request the Yes option of step then step determines whether there is a sensor or motion status change. If there is a status change the Yes option of step , then the process proceeds to step , which has been described above. If there is no status change the No option of step then in step the processor checks whether the camera – is controlled by another storage server If this is the case the Yes option of step then the process proceeds to step in which, as previously described, the camera settings are updated.

If the camera – is not controlled by another storage server the No option of step then in step the processor checks whether a camera control request has been granted. If a request has been granted the Yes option of step , then in step the next control command is sent. The process then proceeds to step , in which the next notification command is sent. If no camera control request has been granted the No option of step , then a check is performed in step as to whether a camera control request has been denied.

If no denial has occurred the No option of step then the process proceeds to step , in which the next notification command is sent. If, however, a camera control request has been denied the Yes option of step then control flow proceeds to step , in which the next control command is sent. The process is executed when an image socket has been connected. If the acquisition frame rate currently required for a given camera is greater than zero, a get image message is created.

Otherwise, the recording engine does not need to receive image data using the socket. The process begins at the first step, where if the processor determines that the required frame rate is not greater than zero, the process concludes.

Otherwise, the processor creates a get image message and the process concludes.

The receive image response process is executed each time there is sample data available to be received by the storage server on the image socket of a camera server. The process begins with a check of whether a complete HTTP header has been received from the camera server; if not, the processor receives HTTP header data. Next, if a complete multipart header has not been received, and no more additional sample data is available, the process concludes; otherwise, the processor receives multipart header data.

Then, at the next step, if the processor determines that a frame buffer region has already been reserved within the memory of the storage server, the process proceeds directly to receiving sample data. Otherwise, if the processor determines that the multipart header contains a content-length, the processor sets a remaining length parameter to the content-length.

Then the processor reserves a frame buffer region within memory according to the remaining length of the sample data to be received. The process continues at the next step, where if additional sample data is available, the processor receives the remaining sample data into the frame buffer allocated within memory or on the hard disk, and then subtracts the received sample data length from the length of the remaining sample data.

The process continues at the next step, where if the processor determines that all of the sample data has been received from the camera server, the process proceeds to handle the completed sample. Otherwise, the process concludes.
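A simplified reader for such a multipart image stream is sketched below: each part carries its own headers, whose Content-Length sizes the buffer into which the JPEG data is read. The boundary handling is deliberately minimal, and the file-like stream is a stand-in for the image socket.

```python
import io

def read_frames(stream):
    while True:
        line = stream.readline()
        if not line:
            return                      # end of stream (socket closed)
        headers = {}
        while line.strip():             # read part headers up to blank line
            if b":" in line:
                k, v = line.split(b":", 1)
                headers[k.strip().lower()] = v.strip()
            line = stream.readline()
        length = int(headers.get(b"content-length", 0))
        if length:
            yield stream.read(length)   # one complete JPEG sample

fake = io.BytesIO(b"--b\r\nContent-Type: image/jpeg\r\n"
                  b"Content-Length: 4\r\n\r\n\xff\xd8\xff\xd9")
print([len(f) for f in read_frames(fake)])   # [4]
```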

Each video sample of the sample data captured by the camera servers is preferably configured as a separate JPEG file, which is inserted as a separate sample into the media file. If sensor based recording i.
