
I recently purchased the SpatialLabs monitor (the Pro version) and was wondering if anyone has experience with this or something similar. The display uses two eye-tracking cameras and a lenticular panel to show a three-dimensional image without requiring glasses. The software is relatively barebones, but so far I have been able to view STL and OBJ files successfully. There are also developer notes on how to integrate with OpenXR and Unreal Engine, but I have limited experience with those.

I would like to find a way to view volume renderings on the display somehow, since that would allow viewing in the OR without 3D glasses or other specialized hardware. I found some prior forum posts about OpenXR integration into 3D Slicer, but I'm not sure what came of it. Basically it seems like I have to make the program think that its output is going either to a VR headset or to a stereoscopic display. Any suggestions?

Recent efforts in stereo viewing have focused on AR/VR headsets, because these devices made most 3D displays obsolete. Headsets are more portable, cheaper (a Meta Quest headset is $300-500), provide a larger field of view (full immersion), and offer full 6-degree-of-freedom viewing and interaction. When you need to see the surrounding real world, you can use an AR headset such as the HoloLens. Both AR and VR headsets are usable in Slicer via the SlicerVirtualReality extension.
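For reference, once the SlicerVirtualReality extension is installed, the virtual reality view can also be turned on from the Slicer Python console. This is only a minimal sketch following the extension's documented Python interface; the exact method names may differ between extension versions, so check its documentation.

```python
# Minimal sketch: activate the SlicerVirtualReality view from the Slicer Python console.
# Assumes the SlicerVirtualReality extension is installed and an OpenXR/OpenVR runtime is
# available; method names follow the extension's documented Python API and may vary by version.
vrLogic = slicer.modules.virtualreality.logic()
vrLogic.SetVirtualRealityConnected(True)   # connect to the XR runtime
vrLogic.SetVirtualRealityActive(True)      # start rendering the 3D view to the headset/display
```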

For headset-free viewing you can use holographic displays, such as the Looking Glass, which is already supported by Slicer. Holographic displays have the advantage that they can be viewed by many viewers (no head tracking is needed).

Single-user 3D displays don't really have much of a place anymore, other than perhaps competing on price or image resolution. I think such monitors (such as the zSpace) are already usable with Slicer (maybe via the SlicerVirtualReality extension?). If SpatialLabs provides an OpenXR interface then you can probably use the SlicerVirtualReality extension. Otherwise you can ask the manufacturer to contact Kitware about how they could add VTK support for their monitor.

I appreciate the response. I've used the Looking Glass, but the hardware requirements are steep and the price rises dramatically once you move beyond the Portrait device. The spatial displays seem like a better intraoperative option, since resolution and performance are better on most hardware and there is no need to wear specialized glasses or VR headsets. If you haven't tried using one of the eye-tracking monitors, I highly recommend it. The main drawback is that it serves one viewer at a time, but that's less of a concern when using it in the OR as a reference.

I will definitely try your suggestions. The volume renderings give a lot more detail than the segmentations (at least when I do them), and it would be great to see how they look on there.

Slicer can now display volume rendering directly in the HoloLens2. The huge advantage of augmented reality headsets compared to 3D displays is that you can place the volume inside the patient at the correct physical location and use it for guidance. Simply manually aligning the visible skin surface can be sufficiently accurate for larger, superficial targets. There are software libraries for automatic alignment and more accurate tracking, but those are not yet integrated into Slicer.

Unfortunately it seems like the resolution on the AR goggles isn't quite there yet. The Ultraleap is pretty good, but the viewing window is small. The HoloLens 2 has similar issues, but we have investigators who use it for telestrating during simple procedures. We use VR and AR (Quest 2 and Quest 3 passthrough) with the MedicalHolodeck program for planning the approach, and 3D Slicer for fast volume rendering or segmentation/printing.

I had been struggling to find a good intraoperative solution for a roadmap-type reference, since most of my partners wouldn't be willing to wear hardware/goggles in the OR. These seem to fit that need: they have two cameras which track eye and head movements and adjust the view to your position. The illusion is very convincing, and I have trialed some segmentations with it with good results. The problem is that those take time, and sometimes volume renderings get the job done. The version I'm using is the Acer SpatialLabs View Pro, but there are nicer versions made by Sony and others.

The Looking Glass product is excellent (I have an LKG Go preordered), but cost/size is an issue and it requires significant graphics hardware to render from so many angles. It's the only non-goggles solution I'm aware of with group viewing, though.

Thanks again for your response. I don’t have any programming experience so your replies here and in the forums have been very helpful. If you know of any AR overlay capabilities for laparoscopy I would be very interested to hear about it since most of my cases are done that way (pediatric surgery).

The HoloLens2 is excellent for in situ visualization. Both the field of view and the resolution are sufficient.

If you just want to display 3D images somewhere above the patient, then a 3D monitor is usable for that. However, stereopsis is only one of many depth cues (you also perceive depth from lighting, motion, occlusion, size, texture, etc.), so while using a stereo display improves depth perception, it is not a game changer. The proof of this is that despite the 3D TV boom in the early 2010s, when stereo displays were available at a very low price point in many sizes and using various technologies (active glasses, passive glasses, and glasses-free), they still did not gain traction in clinical use. It is still possible that 3D monitors will make a comeback (maybe because you find some really good applications), but right now it seems that augmented reality headsets have more potential.

I might just have to give the HoloLens 2 another try then. It is still fairly cumbersome in the OR, but there are not too many options. And I've learned that the only way to really assess these devices is to wear them yourself. The listed display resolution is 1440x936, whereas the one I'm using is a 4K display with each eye receiving 1920x1080. How detailed are these images when viewing through the device? The best use cases in pediatrics are things like conjoined twins and tumors with complex, irregular vasculature. Those can be really painful to segment, and some of the vessels are 1-2 mm in diameter (the IVC on some of these kids is around 1-1.5 cm).

The view on the spatial display reminds me a little of the old active-shutter glasses and 3D, especially in programs that aren't optimized for stereoscopic display (it looks like a bunch of cardboard cutouts moving in parallel). Also, if you don't optimize the focal length it's easy to end up cross-eyed. Ten years ago I owned both a 3D vision monitor and a 3D television, but the technology seems to have improved significantly since then. Again, thanks for your insight. There isn't much experience with this kind of thing in the pediatric surgery world, so it's nice to discuss it with someone who knows this stuff.

How useful do you find the additional depth cue of the 3D monitor? What is it that you can see on the 3D monitor that you cannot already see on a regular 2D monitor? You can perceive depth more directly and slightly move your head to see a bit behind structures, but I'm wondering whether this is really significant.

Regardless of resolution, price, ease of use, etc., 3D monitors cannot compete with the HoloLens unless they make the image appear to float inside the patient. The main challenge in using the HoloLens is not image quality, but

  • how conveniently (and quickly and accurately) you can align the virtual model with the patient
  • how to keep the model position and shape up-to-date during the procedure as things move around
  • how to interact with the device: there are many options - hand gestures, voice commands, controller in sterile bag - but each has its own limitations
  • how to wear it: it is quite comfortable, but you may still want to flip it up or remove it from your head because it still darkens the view a little bit or may add some extra glare, and it may be in the way if you want to use a microscope

Still, if you think seeing the 3D model inside the patient at the correct physical location could be useful, then it is worth a try. You would need a technician to help with it in the OR (prepare the visualization, put the device on and take it off your head, help with the controls, etc.).

From what I've seen, just putting the volume renderings or segmentations on the OR monitors can be difficult to interpret unless you're able to move the model. Our current solution is to connect our laptop workstation (RTX 4070 with 64 GB RAM) to the DVI or HDMI port on the boom in the OR. The image can then be shown on as many displays as we would like in the OR, usually between 2 and 4. Movement of the model is done with a Leap Motion 2 controller, which lets you manipulate the model while preserving sterility. What the 3D display does is minimize the amount of manipulation required to gain an understanding of the image.
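As a rough illustration of this kind of touch-free manipulation (not the actual setup described above): in Slicer, the orientation read from a hand-tracking device can be applied to the model or volume through a transform node. The rotation input below is a hypothetical placeholder for whatever the tracking bridge provides (e.g., Euler angles from the Leap Motion SDK or an OpenIGTLink connection); only the Slicer/VTK calls are real API.

```python
import vtk

# Hypothetical sketch: drive a Slicer transform node from hand-tracking input so the
# rendered model can be rotated without touching the workstation.
transformNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode", "HandRotation")
# modelNode.SetAndObserveTransformNodeID(transformNode.GetID())  # attach the model/volume once

def applyHandRotation(rxDeg, ryDeg, rzDeg):
    """Apply the latest hand orientation (degrees) to the transform node.
    The angles would come from the tracking bridge (placeholder, not a real API)."""
    t = vtk.vtkTransform()
    t.RotateX(rxDeg)
    t.RotateY(ryDeg)
    t.RotateZ(rzDeg)
    transformNode.SetMatrixTransformToParent(t.GetMatrix())
```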

I think we might be using these for different things. For us, AR overlay during open surgery may not add much, because the structures are generally small and you can usually identify the limits of solid organ tumors on palpation. The challenging open situations for us are the ones with complex networks of aberrant vessels within a tumor. The visual fidelity of the HoloLens may not be enough to overlay multiple 2-3 mm vessels and follow them as the tumor is manipulated about its axis (but I would be happy to be proven wrong).

For us the focus is on the preoperative planning phase or providing a roadmap for reference in the OR. The VR headsets provide good pictures for the first part but are not great for the second. The other consideration for us is that a lot of our complex cases are done under magnification with loupes on. Transitioning to the HoloLens and back throughout the case would be cumbersome and might make it difficult to maintain sterility. I'm curious how you and others have dealt with this in the past.

Do you have particular cases where you find this especially helpful? There is some overlap between adult and pediatric surgery, but I'm always interested in finding new ways to improve how we do things. A lot of our complex cases are done laparoscopically or robotically (choledochal cysts, anorectal malformations, etc.). AR solutions that provide overlay data during laparoscopy, or during open surgery under 2.5-3.5x loupe magnification, would be ideal, but I don't think those technologies exist yet. Thanks again for answering my questions. I'm still learning as I go here.

This is indeed useful. I would just add that there are many other ways to improve depth cues or make the images easier to interpret. For example, we recently added colored volume rendering and ambient shadows (see some example images here and here), which can be used in addition to, or instead of, stereo rendering to greatly improve understanding of the 3D renderings.
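If anyone wants to try this from the Python console, below is a minimal sketch along the lines of the Slicer script repository examples. "MRHead" is just a placeholder node name, and the newer colored-rendering/ambient-shadow options are configured through the same display and volume property nodes (their exact property names depend on the Slicer version).

```python
# Minimal sketch: enable volume rendering with shading for an already loaded volume.
# "MRHead" is a placeholder; replace it with the name of your own volume node.
volumeNode = slicer.util.getNode("MRHead")
volRenLogic = slicer.modules.volumerendering.logic()
displayNode = volRenLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
displayNode.SetVisibility(True)

# Turn on shading so lighting contributes depth cues; the colored-rendering and
# ambient-shadow options mentioned above live on these same nodes (version dependent).
volumeProperty = displayNode.GetVolumePropertyNode().GetVolumeProperty()
volumeProperty.ShadeOn()
```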

I agree, these are two quite distinct use cases. The HoloLens has already been proven useful for large, superficial targets and low-accuracy applications (e.g., giving surgeons confidence in determining the skin incision location), but may not be ideal for microsurgery.

Lots of solutions have been developed for displaying image overlays in laparoscopes or microscopes over the past 20 years, but they have not become widely used clinically, probably because they did not work that well in practice. Seeing the recent progress in imaging AI, it is quite likely that real-time AI image annotations will become available in the products of all the large laparoscopy vendors within a few years.

To add AR to surgical loupes, maybe the easiest solution could be to use digital loupes (like nuloupes or mantis) and external tracking.

2 months later

I have been intermittently working on getting volume rendering output onto this monitor, but without success so far. I use a program called MedicalHolodeck for viewing volume renderings in VR, with excellent results. If I create segmentations, I can view them easily using the viewer included with SpatialLabs. The quality of the volume rendering images I get in 3D Slicer is excellent, but there are just limits to viewing 3D images on a 2D screen, even with the additions to improve depth cues.

I've reached out to the display company to see if they have any ideas, but I don't have high hopes for that one. They do have extensions for Unreal Engine, Unity, and NVIDIA Omniverse that allow you to view images within the editor in stereoscopic 3D on the monitor, so it definitely seems possible. Since the VR extension for 3D Slicer allows OpenXR output, it seems like there should be a way to get that onto the display.

If you haven't tried one of these types of displays, I highly recommend it. The clarity is much better than the smaller holographic displays (I haven't gotten to try the large ones for budget reasons) and the hardware requirements are significantly reduced. Most importantly, it avoids the need to wear glasses or VR goggles to get a true 3D image. It even alters the perspective on the image based on your head and eye movements. The only major drawback is that only one user can view it at a time, since it relies on eye tracking; in that situation the holographic display is the better option.

I may just have to make my own extension to 3D Slicer to make this possible, but I don’t even know where to begin with that. Any help is appreciated, as always.

If SpatialLabs supports OpenXR then you don't need to do anything: Slicer can already show 3D volume rendering on those screens! You can configure the 3D display on that screen similarly to all other OpenXR-compatible displays, as described here.
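In case it helps others with a lenticular/eye-tracked display: with OpenXR, the active runtime is normally selected through the XR_RUNTIME_JSON environment variable, so pointing it at the vendor's runtime manifest before launching Slicer should make the SlicerVirtualReality view render through that runtime. The sketch below assumes SpatialLabs ships an OpenXR runtime manifest; both paths are placeholders.

```python
# Sketch: select the OpenXR runtime before launching Slicer.
# XR_RUNTIME_JSON is the standard OpenXR mechanism for choosing the active runtime;
# both paths below are placeholders for whatever is installed on your system.
import os
import subprocess

env = dict(os.environ)
env["XR_RUNTIME_JSON"] = r"C:\path\to\SpatialLabs_openxr_runtime.json"  # placeholder manifest path
subprocess.run([r"C:\path\to\Slicer.exe"], env=env)                     # placeholder Slicer path
```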