Welcome to this article about Miris station recording profiles.
Recording profiles are the nerve center of your recorder: they define the parameters and functions available during recording and publishing.
First, we'll take a look at how to set up three classic user profiles: test, camera, and camera + computer. Next, we'll go into detail on each section of the recording profiles: video and audio sources, rendering settings, Nudgis features, streams and finally advanced options. This second part is dedicated to fleet administrators who want to master all recording profile parameters.
TABLE OF CONTENTS
1/ RECORDING PROFILES
2/ ADVANCED PROFILE MANAGEMENT
2A/ VIDEO SOURCES
2B/ AUDIO SOURCES
2C/ RENDERING SETTINGS
2D/ NUDGIS FEATURES
2E/ STREAMS
2F/ ADVANCED OPTIONS
Recording profiles
Recording profiles are available via the Settings > Configuration > Recording Profiles menu.
By default, if you didn't create a profile during the initial installation phase, you'll have a profile called "all-inputs". This profile will attempt to detect the devices connected to your station, so for optimum performance, we recommend that you create recording profiles adapted to your use case.
You can have as many recording profiles as your use cases require.
You can rename a profile, configure it, restore a previously saved version, duplicate it or delete it by clicking on the corresponding icon. Finally, you can create a new recording profile using the dedicated button. For example, let's create a test profile with virtual sources, which is useful if you haven't yet physically connected your camera or computer to the station. The first thing to do is to give the profile a name; it will be used to launch your recording. Here, we'll call it "test", as it's a test profile.
Then click on the configuration button to set up your "test" profile.
For ease of use, we'll define our video and audio sources, as well as our main rendering resolution.
First, we'll add our video source:
Please define a name for the source. If your video source also contains audio, you can use the same name for your RTSP, NDI or SRT audio source to avoid having to redefine the audio source type. Next, select the type of video source, in our case "Test signal", and finally choose the test pattern: smpte, red, blue, green or ball.
Perform the same operation for the audio source and choose the test sound type from ticks, sinusoidal tone or white noise, for example.
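As a side note, these pattern and sound names match those of GStreamer's videotestsrc and audiotestsrc elements (the station is GStreamer-based, as the advanced options section shows). If you want to preview what a given pattern looks like on any machine with GStreamer installed, you can run, for example:
gst-launch-1.0 videotestsrc pattern=ball ! autovideosink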
Finally, you can choose your main rendering resolution from 2160p, 1440p, 1080p (or a custom value). Bear in mind that the higher the resolution, the larger the recording file size. For a standard recording with two sources, you can select 1080p; if you have more than two sources, select a resolution of at least 1440p.
Click on the "Save" button to validate your changes. You'll get a new test profile.
Since it only uses virtual sources, this test profile is available for your tests regardless of the devices connected!
Now we'll look at setting up the most frequently used recording profiles, which rely on physical or network sources: a camera-only profile, and a camera + computer profile.
Since we're going to use physical or network sources, if one of the sources is not available for preview, you'll get an error like this:
This means that the "cam" video source in the "camera" profile is not detected by the Miris station.
So be sure to check that your sources are physically connected to the right place, or available on your network, before you start recording.
The camera profile will have one video and one audio source. It will record the speaker with a microphone. So we're going to create a profile that we'll call camera, with a video source and an audio source, as we saw earlier when creating the test profile.
We're going to add a "cam" video source, which in this case will be a network camera providing an RTSP stream, e.g. "rtsp://IP/media/video1" for a Sony camera. If required, you can enter the user name and password if your RTSP stream is protected.
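Tip: before configuring the profile, you can check from another machine on the network that the RTSP stream is reachable and encoded in H.264/AAC, for example with ffprobe from the FFmpeg suite (a third-party tool, not provided by the station; replace IP with your camera's address). If the stream is protected, the usual rtsp://user:password@IP/... form can be used for this quick test:
ffprobe -rtsp_transport tcp "rtsp://IP/media/video1"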
In the second part, we'll explain how to retrieve the RTSP stream from your camera. If you want to retrieve a stream directly from your network, you'll need to use an NDI video source, for example.
You can activate the "Perform AutoCam speaker tracking on this input" option. This will only work if it is enabled on your Nudgis server and allows you to track the speaker. Please note that this feature will run on your Nudgis Worker server, not your Miris station.
For our audio source, we'll use the microphone connected to the camera, so the audio source will be of the "Source RTSP" type, also using the "cam" name of our previous video source.
Once you have validated your changes, click on the "Save" button. You'll get a preview of the "camera" profile:
For the camera and computer profile, we'll have two video sources and one audio source, making a total of three sources. The only difference here is that we'll be adding a video source that captures the computer screen in addition to the speaker. This is why we invite you to duplicate the "camera" profile by clicking on the dedicated button:
Then rename the "camera_copy" profile to "camera-pc".
Edit your new "camera-pc" profile by clicking on the dedicated button, then add a new "data" video source, here HDMI, and activate the "Detect slides on this input" option. This enables slide detection if you're using a presentation tool.
After confirming your changes by clicking on the "Save" button, the "camera-pc" profile preview will be displayed:
We're now going to take a closer look at the various settings and options of the recording profile for advanced use by Miris administrators.
Advanced profile management
Miris stations support both physical and network video sources:
Up to two video sources for Miris Box Mini.
Up to six video sources for Miris Box Plus.
Up to six 1080p networked video sources for Miris Netcapture.
Video sources
The stations support 5 types of video sources:
There are two types of physical input: USB (USB 3.0 UVC-compatible device) and HDMI (up to 1080p resolution at 60 fps). A special feature of the Miris Box Mini is HDCP support, which its sister models do not offer.
To use a physical input, choose "HDMI or USB input" for the video source type, then select your source (it should be detected automatically) in "Selected physical input".
If required, you can define the resolution and a custom frame rate.
Check the "Detect slides on this input" box if the source is a data source with a presentation (when exporting, Nudgis extracts the slides and keywords from this source).
Here I have a 1080p HDMI input that I've selected and for which I've activated the slide detection option:
You can also retrieve four types of network sources:
RTSP: If you're using an IP camera, you'll need to select this source. Please note that we only support H.264/AAC streams. To retrieve the camera stream, please connect to your camera or use the software provided by the manufacturer; otherwise, you can use an ONVIF tool to retrieve the camera stream link. For example, the link for a Sony camera would be "rtsp://IP/media/video1". If required, you can enter the user name and password if your RTSP stream is protected.
NDI: If you need to retrieve a stream from your network, we support NDI (NDIv5 and NDI|HX v1, v2 & v3 but not the H.265 codec). All you need to do is select your NDI source from the drop-down menu. Please note that with NDI HX v1, you need to remove the alias for the CAM network (192.168.12.10/24), otherwise the NDI stream will be "discovered", but will not be playable.
SRT: Since Miris 2.6.0, we've added support for the SRT (Secure Reliable Transport) protocol for video sources, enabling stream encryption and use on less stable networks. This video transport protocol is based on UDP and uses port 4343 by default. It's best to set your camera to "Listener" mode, because if you set it to "Caller" mode, you'll have to start the video stream from the camera. Finally, you'll need to set the Miris station to "Caller" mode, then enter the IP address of the remote SRT source (your camera).
RTP: You can retrieve an RTP video stream by defining a UDP port (default 10000).
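A quick way to check an SRT source before configuring it: if your camera is in "Listener" mode, you can connect to it as a caller from any machine with an SRT-enabled FFmpeg build (a third-party tool; CAMERA_IP is a placeholder, and the port 4343 shown is the default mentioned above, to be adjusted to your camera's settings), for example:
ffplay "srt://CAMERA_IP:4343?mode=caller"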
Finally, there is the virtual "Test signal" source, which we discussed when creating our test profile. This concludes our presentation of the various capture possibilities for your video sources. We will now continue with the capture of your audio sources.
Audio sources
Miris stations support both physical and network audio sources. They all support audio capture from an RTSP network stream, NDI, Dante (via a Dante AVIO USB adapter), or an HDMI audio source. However, each has its own particularity:
The Miris Box Mini can retrieve audio via a 3.5 mm jack.
In addition to these physical audio inputs, the Miris Box Plus also accepts XLR or RCA inputs.
Miris Netcapture allows you to capture only network audio streams.
The configuration of audio sources is based on the same principle as for video sources. However, for RTSP, SRT and NDI network audio sources, the name field must correspond to the associated video source: in other words, if your video source is called "cam", you'll also need to name your audio source "cam". For physical audio sources, you'll need to select the desired input. For example, on a Miris Box Mini you can select:
And on a Miris Box Plus, you'll have the following choices:
If you have a Dante AVIO USB adapter, it will be detected as follows:
We continue with the next section on rendering settings.
Rendering settings
As we saw in the first part with the creation of recording profiles, you can select a rendering resolution for your recording.
Please note that the higher the resolution, the larger the file size:
2160p (10-20 Mbit/s), i.e. 5 to 10 GB per hour.
1440p (5-10 Mbit/s), i.e. 2.5 to 5 GB per hour.
1080p (4-8 Mbit/s), i.e. 2 to 4 GB per hour.
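As a rule of thumb, one hour of video takes about 0.45 GB per Mbit/s of encoding rate (bitrate × 3600 s ÷ 8 ÷ 1000): roughly 1.8 GB per hour at 4 Mbit/s and 3.6 GB per hour at 8 Mbit/s, which matches the ranges above (audio and container overhead add a little on top).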
If required, you can also change the encoding rate and the main frame rate for video recording rendering. By default, the encoding rate is 5 Mbps and it is not possible to go beyond 40 Mbps. For the frame rate, you can choose between 25 fps and 30 fps; the default is 25 fps (Europe).
You can also have the recording rendered on an HDMI screen for monitoring, by activating the "Enable video preview output" option. This can be useful for showing the preview on a local monitor so that the presenter can check that the framing is correct or that the recording is in progress. Please note, however, that this will only work if the monitor is plugged in before the station is started.
If the "Enable video preview output" option is active, a new option, "Enable OSD on preview output", will appear, displaying the recording status as an overlay. Finally, via the "Audio output for preview" setting, you can choose the output on which to play the recording sound.
Nudgis features
We're now going to look at a few parameters linked to your Nudgis server.
Firstly, you can define a default location, which lets you have the location name in the video title instead of the station name (of the form "ubi-box-mac-address"). In this way, you'll be able to find your videos according to where they were recorded. For example, we've set the default location to "Room A":
You can find this information in the video metadata in the advanced video settings on your Nudgis server for the "Creation location" entry.
Next, you can activate automatic publication to a channel on your Nudgis.
To do so, select the Nudgis server, then the publication channel.
The "Automatically remove media after a successful automatic export" option deletes the recording from the station if the export has been successfully completed.
If you have created several automatic publications, they will be performed one after the other.
The next Nudgis feature concerns live broadcast configuration. It enables you to activate the "Start Live" button on the recording page. We're using the same principle as for configuring the automatic publishing channel, but this time for your live broadcast page.
You will then have three options for live streaming to Nudgis:
- Create Nudgis live page in the same channel as VOD: If enabled, and if a channel is defined in the recording metadata (e.g. provided via the "Course Id" of a scheduled recording), the live page will be created in the same channel as that used for VOD publication. Note that if Miris Manager or the interface passes a value for the "live_oid" parameter, it will take priority.
- Transfer social annotations from the live page to VOD in the case of automatic publication: all annotations made on the live page will be transferred to the VOD when it is sent to the server (automatic publication only).
- Delete all social annotations on the live page when the live broadcast stops.
Streams
We'll now move on to the Streams section, where you can set up the streams sent by the station.
By default, there are three streams: one corresponding to the main rendering resolution, followed by a 1080p and a 720p stream. This section groups together preview streams (HLS) and live streaming to a server (RTMP). This is an advanced topic that can cause unexpected problems: by default, streams are configured automatically, so we advise you not to change the parameters (stream width, height, frame rate, video encoding rate, audio encoding rate) and to contact the UbiCast support team beforehand for any modification. Other parameters, however, can be modified: the "Included in HLS stream" option allows you to enable or disable the stream in the preview.
The "Enable RTMP stream" option, used to send your live broadcast to an RTMP server, can also be modified.
You'll need to concatenate all your RTMP parameters into a single URL. For example, to broadcast live to YouTube, you'll have these three parameters:
RTMP server: rtmp://a.rtmp.youtube.com
RTMP app: live2
RTMP feed: your stream key
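Concatenated, the URL to enter therefore looks like this (the last part is a placeholder for your own YouTube stream key):
rtmp://a.rtmp.youtube.com/live2/xxxx-xxxx-xxxx-xxxx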
At least one audio source must be configured for live-streaming to an RTMP server to work.
Please note that it is not possible to broadcast live to Nudgis and an RTMP server at the same time. In addition, there must be at least one active stream for recording and previewing to function correctly.
Advanced options
This last section, "Advanced options", contains configuration options that allow more precise adjustments and should be handled with care.
First of all, you can change the automatic behavior of recording actions.
Start live broadcast automatically: By default, the option is set to "Never", so you'll have one button to start recording and another to start a live broadcast. The other available option is "When recording", which lets you launch a recording and a live broadcast with a single button. Finally, if required, you can select the "Always (even during preview)" option, in which case the live broadcast will be launched automatically after selecting your profile.
Stop the preview automatically: The default behavior, "Never", is to keep previewing as long as a user remains logged in. You can force the preview to stop after each recording by selecting the "After stopping recording" option, to force the HLS URL to change more often, to reduce power consumption or to adjust the user experience.
Unpause automatically: The default setting, "Always (when starting and stopping recording)", automatically lifts any pause when the recording starts or stops. However, it is possible to apply this only "When starting recording", or to deactivate it completely with the "Never" option.
Maximum recording duration: You can change the default maximum recording time of 4 hours, with zero representing unlimited recording time.
The following section lets you modify the rendering appearance of your recording on your Nudgis server.
You can add a background image by specifying an absolute path to a jpg or png file such as "/home/ubicast/mediacoder/backgrounds/background.png". You'll need to transfer this image with an SFTP client, using the "ubicast" account to connect.
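For reference, the transfer from your workstation could look like this with a command-line SFTP client (STATION_IP and the local file name are just examples):
sftp ubicast@STATION_IP
sftp> put background.png /home/ubicast/mediacoder/backgrounds/background.png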
Then, if you wish to define a particular layout for your sources, uncheck the "Automatic layout" option, although we recommend keeping automatic layout.
Unchecking the "Publish as Dynamic RichMedia on Nudgis" option allows you to deactivate Dynamic RichMedia publishing, so that your media will be published in RichMedia HD format.
If you've disabled the "Automatic layout" option, you can apply a Dynamic RichMedia layout other than 50/50 in JSON format, which you can retrieve from your Nudgis server in your video's player settings.
The next section allows you to modify the log level of the GStreamer multimedia framework and to change the encoding engine. Please contact UbiCast support before making any changes to these settings.
You can also "Ignore capture limits", but we strongly advise you not to activate this option, as the system will record beyond its specifications. Use it at your own risk.
If you don't want to "Secure the HLS stream by randomizing the URLs every time the preview or capture starts", please uncheck this option. In this case, anyone with the HLS link will be able to view the recording.
Finally, you'll be able to execute third-party actions via scripts, triggered by "Start/stop preview or recording" events. Sample scripts are available in the "/usr/share/mediacoder/examples" directory. For example, at preview startup, you can launch a script by specifying its full path to call up a camera preset, in this case for a Canon camera:
Scripts can also be exposed as customized action buttons.
For example, for camera preset calls, you can use the "pycamctl" script, available at "/usr/share/mediacoder/examples/pycamctl". It is currently only compatible with Sony and Panasonic cameras.
To call up preset 1 of a Sony SRG-300SE camera, use this command:
/usr/share/mediacoder/examples/pycamctl --model Sony-SRG-300SE --ip 192.168.1.2 --user admin --password admin1234 preset_call 1
You'll then get a "Wide shot" button in the recording interface, allowing you to call up the desired camera preset:
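If you would rather have this preset called automatically when the preview starts instead of via a button, a minimal event script wrapping the same command could look like the sketch below (the script location and file name are only examples; make the file executable and reference its full path in the "Start/stop preview or recording" event field):
#!/bin/sh
/usr/share/mediacoder/examples/pycamctl --model Sony-SRG-300SE --ip 192.168.1.2 --user admin --password admin1234 preset_call 1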
For further information or to contribute new features, please consult our code repository at https://github.com/UbiCastTeam/pycamctl.
This article has given you a comprehensive overview of how to use and configure recording profiles. You can now start recording!