NHK is holding "Giken Open 2021 Online" until Wednesday, June 30. The Giken Open, held every year at the NHK Science & Technology Research Laboratories (STRL) in Kinuta, Tokyo, is a popular event where the latest broadcast technology is presented. This year, due to the COVID-19 pandemic, it is being held online only, without the usual physical exhibits and events. There is still plenty here for audio-visual fans, and the online presentations are well worth seeing.
However, with an online-only presentation it is hard to ask questions about the points you care about. This time, at Mr. Asakura's request, "I want to hear directly from the people in charge about how they are developing these technologies," we asked NHK to respond. Here we present the themes Mr. Asakura focused on and his interviews about them. (Editorial department)
High-resolution light-ray reproduction 3D display system
This is a glasses-free 3D display first shown in 2018 under the name "Aktina Vision" ("aktina" meaning "light ray"). At that time it had SD resolution, but three years on the resolution has been improved. The source image is 960 x 540 pixels, and images shifted diagonally by half a pixel are added by time-division processing; playback uses a total of six projectors, two 8K and four 4K, achieving HD resolution. The improvement is considerable. The screen size is 21.5 inches.
When I checked it on site, the pixel structure was still somewhat coarse, but the depth and the rounded three-dimensionality of a person's cheeks came through well. The horizontal viewing angle is 30 degrees, with 384 viewpoints generated from 24 cameras. The goal is practical deployment in the 2040s. (Asakura)
Asakura: At the Giken Open, 3D video has been exhibited for many years. What is this year's exhibit?
Omura: This is the "light-ray reproduction 3D display", a three-dimensional display that can be viewed by multiple people at once. The previous Giken Open in 2019 featured an integral-method exhibit, so this is the first showing of this display since then.
Asakura: An SD-quality system was exhibited in 2018; is the higher resolution the main point of evolution this time?
Omura: Yes, the key technical point is that we raise the resolution using a pixel-shift method. To put it more precisely, we use time-division technology to multiplex the light rays in time. The time division is used in two ways: one to increase resolution, the other to increase the density of light rays.
Asakura: At what frame rate is it divided?
Omura: The projectors run at 120p, so dividing by four gives video equivalent to 30p.
Asakura: So within each set of four 1/120-second sub-frames, some carry the resolution enhancement and the others raise the density of light rays, shown in sequence. Is that the right understanding?
Omura: Yes, that's right.
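The division Omura describes, a 120p projector carrying four sub-frames per 30p video frame, can be sketched as follows. This is a toy illustration of the scheduling arithmetic only; the sub-frame role labels are hypothetical and do not represent NHK's actual pipeline.

```python
# Toy sketch of the time-division idea: a 120 fps projector shows four
# distinct sub-frames per 30 fps video frame, each carrying either a
# diagonally shifted pixel grid or an additional set of light rays.
PROJECTOR_FPS = 120
VIDEO_FPS = 30
SUBFRAMES = PROJECTOR_FPS // VIDEO_FPS  # 4 sub-frames per video frame

def subframe_schedule(video_frame):
    """Return (projector frame index, illustrative role) for one video frame."""
    base = video_frame * SUBFRAMES
    roles = ["shift(0,0)", "shift(1/2,1/2)", "ray-set A", "ray-set B"]
    return [(base + i, roles[i]) for i in range(SUBFRAMES)]

# Video frame 0 occupies projector frames 0-3.
print(subframe_schedule(0))
```

Because four sub-frames are spent per output frame, the effective motion rate drops to 30p, which is exactly the flicker trade-off discussed next.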
Asakura: Since the video is 30p, you may see flicker in moving parts, but that should disappear as projector frame rates increase. So how does the projection mechanism itself work?
Omura: From the footage captured by 24 cameras, a 384-viewpoint video is created using production technology. These 384 images are divided among six projectors and superimposed on the screen, which allows the result to be perceived as three-dimensional.
Asakura: Does the playback system use 4K projectors?
Omura: We use two 8K projectors and four 4K projectors.
Asakura: Synthesizing images shot from many viewpoints so that they appear three-dimensional sounds like the same idea as the integral method. What is different?
Omura: The difference lies in the images themselves. With the integral method, the image looks strange if viewed in 2D, but with Aktina Vision each image can be recognized as normal video shot from a different angle even when viewed in 2D.
Put simply, the integral method superimposes the multi-viewpoint information within the image itself, while Aktina Vision superimposes multiple images from different angles on the screen.
Asakura: So the difference is whether the superimposed image is created by video processing or by actually overlapping images on the screen. That means the integral method allows a simpler playback device.
Omura: Certainly, the projection mechanism is easier to build with the integral method, while Aktina Vision requires difficult lens design. Each has its advantages and disadvantages, but I think Aktina Vision lends itself better to raising the resolution.
Asakura: The integral method has not yet achieved full-HD resolution. Is this technology aimed at practical use in the 2040s?
Omura: In the 2040s we aim to replace the current living-room TV. The integral method is aimed more at personal viewing, with a target of practical use in the 2030s.
Asakura: So you are pursuing 3D video on two parallel tracks. Aktina Vision only started as a research project in 2018, and HD resolution has already been achieved, so expectations are high. If HD was reached this quickly, 4K seems close at hand (laughs).
Omura: 4K will not be easy. But the ability to reproduce beautiful 3D video is Aktina Vision's strength, so we will do our best.
Kano: Let me explain how the 3D video you saw earlier is shot. The setup consists of 24 color cameras that record the video, plus one color-depth camera that records color and depth information. A lens array placed in front of the color-depth camera splits the view into four viewpoints, from which depth is determined.
Asakura: So the data captured by these 25 cameras is the source material.
Kano: We create the 384-viewpoint video from this information. Think of it as inserting synthesized views between the 24 color cameras to fill the gaps.
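The gap-filling Kano describes, synthesizing 384 viewpoints from 24 real cameras, can be sketched with a naive linear blend between adjacent cameras. The real system uses the depth information from the color-depth camera for proper view synthesis; the cross-fade below only illustrates how 384 view positions map onto 24 camera positions.

```python
import numpy as np

N_CAMERAS = 24
N_VIEWS = 384

def interpolate_views(camera_images):
    """Naive sketch: fill a 384-view array by blending adjacent camera images.
    camera_images has shape (24, H, W). The actual system performs
    depth-based view synthesis, not a plain cross-fade."""
    cams = np.asarray(camera_images, dtype=float)
    views = []
    for v in range(N_VIEWS):
        pos = v * (N_CAMERAS - 1) / (N_VIEWS - 1)  # map view index to camera axis
        i = min(int(pos), N_CAMERAS - 2)
        t = pos - i
        views.append((1 - t) * cams[i] + t * cams[i + 1])
    return np.stack(views)

# 384 views from 24 tiny dummy images
dummy = np.arange(24).reshape(24, 1, 1) * np.ones((1, 2, 2))
out = interpolate_views(dummy)
print(out.shape)  # (384, 2, 2)
```

The endpoints of the synthesized range coincide with the first and last physical cameras, and roughly 16 views fall in each camera interval.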
Asakura: In the future, could this all fit into something like a current TV camera?
Kano: It can't all go into one camera, but I think we can reduce the number of cameras over time.
Asakura: So the subject stands in front of the 24 cameras and performs. Are the color cameras special?
Kano: The color-depth camera is a special build, but the others are ordinary cameras with resolutions between 2K and 4K.
Asakura: If you raised the resolution here, could the 3D video become more detailed?
Kano: Since we create the 2K 3D video from the region common to all the camera views, 2K cameras leave little resolution to spare. With 4K cameras we would have that headroom, which would be advantageous.
Asakura: Do you plan to increase the number of cameras in the future?
Kano: For practical use we would actually like to reduce the number of cameras. But more cameras means higher image quality, so it comes down to how we think about that balance.
Asakura: Creating 384 views from 24+1 cameras must already be demanding; if you move to 4K, won't the processing time for view generation increase?
Kano: The load will be greater than now, but computers are evolving quickly, so I don't think it will be a major problem.
Asakura: The final goal is 8K capture, right?
Kano: That's a tough one... For now the milestone is full HD; the next goal will be considered after that.
Asakura: 8K broadcasting has already begun, and users will be expecting 8K 3D video.
Kano: Resolution is important, but I think we also need evaluation experiments on what resolution is actually optimal when humans view 3D video.
Asakura: For that matter, at 8K you can often sense depth even in 2D video. It will be interesting to see what the result looks like when this technology is added on top.
Omura: That's right. It is uncharted territory for us, but a very interesting theme.
Asakura: In the end, 8K quality will be the goal, so please keep at it.
Computational photography
"Computational photography" here means capturing 3D from a single viewpoint with a single camera. It is digital technology, in the sense of photographs processed on a computer, that can change the focus position after shooting, remove blur, and apply super-resolution to parts of an image. R&D is booming worldwide, but STRL's goal is a 3D camera. (Asakura)
Asakura: This exhibit was also very interesting online. Being able to change the focus of an image after shooting is fascinating. When did this research start?
Muroi: Discussions started around 2017, and research began in 2019. Since then, similar research has started up in various places.
Asakura: Shooting this way lets you shift the focus freely afterwards through image processing. How would you apply it to broadcasting?
Muroi: There is the question of how to display it, but the biggest goal is to capture the source data in three dimensions, so we want to bring it to a three-dimensional display.
Since this is holography, the optical system, which uses a prism to create the interference fringes for capture, is relatively simple. The capture itself, however, uses a special image sensor called sCMOS.
Asakura: So what you capture are interference fringes, and the image is reconstructed from them by post-processing. Previous methods used laser light for illumination, but this one is different.
Muroi: Yes, the point this time is that the illumination is ordinary LED lighting, that is, incoherent light (light with a mix of wavelengths traveling in different directions, like natural light).
Asakura: Coherent laser light, with its single wavelength, is better suited to capturing interference fringes, so the fact that incoherent lighting works is the key point.
Muroi: Lasers cannot be used when shooting people, so a system that works with near-natural lighting will be required. The optical system itself is the same for laser and natural light, but with incoherent light it is difficult to capture interference fringes unless the optics are aligned to an accuracy of 10 micrometers or better.
Asakura: How does the image reconstruction work?
Muroi: We combine four captured interference-fringe images to create one image. From the four images we calculate amplitude and phase information; since this complex-amplitude hologram contains 3D information, entering a distance from the lens into the processing software brings that plane into focus, and the image can be reconstructed.
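The reconstruction Muroi describes, combining four fringe images into a complex amplitude and then refocusing to a chosen distance, corresponds to the standard four-step phase-shifting and angular-spectrum propagation techniques. Below is a minimal NumPy sketch under that assumption; the phase shifts (0, π/2, π, 3π/2), wavelength, and pixel pitch are illustrative values, not NHK's actual parameters.

```python
import numpy as np

def complex_amplitude(i0, i1, i2, i3):
    """Four-step phase shifting: fringe intensities recorded at reference
    phase shifts 0, pi/2, pi, 3pi/2 combine into one complex amplitude
    (up to a constant factor)."""
    return (i0 - i2) + 1j * (i1 - i3)

def refocus(field, wavelength, pitch, z):
    """Angular-spectrum propagation: numerically refocus the complex
    field to a plane at distance z (meters)."""
    ny, nx = field.shape
    fx, fy = np.meshgrid(np.fft.fftfreq(nx, d=pitch),
                         np.fft.fftfreq(ny, d=pitch))
    arg = (1.0 / wavelength) ** 2 - fx ** 2 - fy ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0))  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))
```

For a point object at distance z from the sensor, calling `refocus(hologram, wavelength, pitch, -z)` brings it into focus, which is the "enter the distance and the focus fits there" behavior described above.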
Asakura: The online exhibit had a page where you could freely adjust the focus; so that is what it was doing. Going forward, do you want to make the optical system compact, down to an ordinary camera size?
Muroi: Of course, and I also want to be able to shoot video.
Asakura: Monochrome is fine for capturing interference fringes, but for commercialization it will have to be color video.
Muroi: Currently the wavelength filter is green, so we shoot in monochrome, but if we can use color filters I think color capture will be possible.
Asakura: What are the development themes from here?
Muroi: First, noise reduction, which will make video applications possible. Also, we currently capture the four images separately, and I would like to develop a way to capture them in a single shot.
Asakura: Video, color, compact cameras; I look forward to the progress ahead, and to the realization of an integrated 3D camera.
"Giken Open 2021 Online", running until Wednesday, June 30, features many research presentations of interest to audio-visual fans. In the first part of this series we introduced two technologies that Mr. Asakura focused on. In this second part, we deliver two more themes: one is an immersive VR display using a flexible OLED panel, and the other is quantum-dot technology used as a light-emitting element. Both are important studies that could change the displays of the near future. Here are the details. (Editorial department)
"Giken Open 2021", which presents the latest research results of the NHK Science & Technology Research Laboratories (STRL) online, is being held on the web for one month, from Tuesday, June 1 to Wednesday, June 30.