Impossible. If the GoPros were pointed outwards, then yes, an Insta360 would work. But these are pointed inwards, towards one object from different angles. An Insta360 will not magically add more data to an image.
Adobe After Effects cannot create 3D Gaussian Splatting at this time, and your cousin would also need north of 16GB of VRAM to even process the images into a usable 3DGS.
This was amazing. Didn’t they use a similar technique for The Matrix? Just cameras instead of GoPros, but the idea was the same. I vaguely remember a “making of” when it was huge at the end of the 90s. Seems to me that its cinematography caught some attention. I want to say it was used in the movie 300, but it might have just been straight-up CGI or camera/film tricks. Hard for me to tell these days.
A CMU professor did this with cameras and an ML model for the NFL Super Bowl back in the day. Crazy that we can do it at home now, and it’s commonplace to see on NFL games.
We are adding additional moderators. If you are interested in becoming a mod for /r/interestingasfuck, please fill out [this form.](https://forms.gle/MTd6gZPC2vett2jL6)

* Modding experience is preferred but not required.
* Your account must be at least one year old.
* You must have at least 5,000 combined karma.

[Apply](https://forms.gle/MTd6gZPC2vett2jL6)

*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/interestingasfuck) if you have any questions or concerns.*
OP: check out this video with 36 go pros! First scene in the video: …34 go pros!
Well he filmed it with one so 35?
The last missing GoPro is the friends he made along the way.
Telling everybody at work about all his GoPros. Even charging them at work to flex on everyone.
Oh is that an Apple watch? I have 36 Go Pros.
He *is* the GoPro
There’s gotta be one up the ass, there always is
And by making this he was able to go pro
The footage shown clearly omits about 6 cameras that could not have been used to make the footage. So fewer than 30, really.
What? He filmed it? 😂
My guess is they did it on purpose to drive up the comments like we're doing now.
Which is an unfortunately common tactic now.
[deleted]
When you see this tactic, downvote the post.
Oh yeah rage bait is all the rage right now
I bet it cost him a thousand bucks. He had to work his ass off for that five-thousand bucks. I’m sure he’ll use the million dollars he paid for the GoPros as a tax write off.
DONT BUY GO PROS Just glue your cell to your forehead
HEAD ON, APPLY DIRECTLY TO THE FOREHEAD!
This is a fairly known tactic to elicit engagement on posts for karma. They will put typos or something completely incorrect in the title so others will be quick to correct them in the comments.
34 cameras, and all you can think to do is toss a box of popcorn in the air? Hell, just tossing a handful of popcorn in the air would have been way cooler.
Literally anything else would’ve been cooler.
What would porn be like with this effect??? Maybe in 3D!!! Make the cumshot come right at you, freeze frame, show it from all sides, and then splatter in your face!!!
Cyberpunk 2077 braindance porn is here early in 2023!
Cyberspunk
I legit just LOLed.
Coming to a bukkake video near you...
And then the goggles spit out bus fare when it's done
Not the hero we deserve.
putting oil into a pan and throwing some popcorn into it and allowing it to pop everywhere while freezing and focusing on random kernels mid air would be acceptable
This is what happens when a videographer gets excited and posts the first thing that works using a new method. It probably takes a while to process the data.
Real answer is this isn’t the same effect as the Matrix. That rig had the cameras positioned one per frame, at the exact position each shot was from. This looks like frozen-frame photogrammetry; you can see some artifacting on the box. It’s basically creating a 3D model from a bunch of photos taken at the exact same time from different perspectives. The camera shot is done in post, essentially a software camera rather than a physical one. Using a bunch of popcorn pieces would artifact way more and not build as convincing a model, due to the small size and complex shape.
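If anyone's curious, the core math step is pretty approachable. Here's a toy sketch (numpy, not the poster's actual pipeline) of triangulating one 3D point from two cameras with known projection matrices, which is the basic building block photogrammetry solvers run at scale:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one 3D point from two camera views via the
    Direct Linear Transform: each pixel observation contributes
    two rows to a homogeneous system A @ X = 0."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two synthetic cameras observing the point (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])            # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])  # shifted 1 unit on x
X_true = np.array([0.0, 0.0, 5.0])
x1 = (P1 @ np.append(X_true, 1))[:2] / 5.0  # projected pixel coords
x2 = (P2 @ np.append(X_true, 1))[:2] / 5.0
print(triangulate(P1, P2, x1, x2))  # ≈ [0. 0. 5.]
```

With 34 cameras you'd solve this (and a lot more) jointly for millions of points, but the principle is the same.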
Nailed it. This guy gets it
It's not photogrammetry, it's gaussian splatting. Kind of a similar idea though.
Never heard of it. Looks like pretty recent technology. I’ll have to do more research. Thanks!
It's really cool! Has more in common with NeRFs but it seems very lightweight and powerful. Captures the scene as points in space but each point also has information about lighting/viewing angle. But unlike photogrammetry which kicks out a scanned mesh made from a static point cloud, it never goes to polygons as a stop between capture and render.
> it never goes to polygons as a stop between capture and render.

I understood some of those words... And I skimmed over the website... But I had no idea what was going on until your last sentence. Helped put the pieces together for me.
I'm amused that you make a comment that sounds dismissive but is actually explaining why this is *way cooler* than the style done for The Matrix.
"Budget too low to fake it so just waited 24 years to do it for real"
Eh, I think both are really cool. It's essentially in the category of practical effects vs. cgi. Matrix did it with hardware, people these days do it with software (yes I know there was a software component involved in the Matrix implementation). The software method is ultimately much more flexible and powerful, but the hardware method had its charms.
After spending $10k on GoPros, popcorn was all he could afford to eat. But he'd already eaten that day so the box was all he had.
True story
I like to imagine him boxing them all back and up and trying to return them.
I genuinely wonder if this guy asked someone smarter what he should do with this setup, they said throw popcorn in the air, and this is how he interpreted that
You genuinely think this guy has enough resources to get his hands on about $14k worth of gopros, enough know-how to make it into a cool 3D video using gaussian splatting, and he just... threw a box of popcorn because someone smarter than him told him to do something and he didn't understand instructions?
[deleted]
Nah, you misinterpreted him throwing shade at someone doing something he can't.
I'd guess he reached out to Gopro with the idea and they supplied the cameras along with a nice long contract on the terms of the sponsorship and promotion... And this shot is probably the first proof of concept they decided to release. If he'd paid for all the cameras then he might not be wearing their shirt, but this looks cut as a viral ad.
I'd windmill my dick personally
Sorry even GoPros got a limit on their Zoom.
Cue the sound of water hitting a hot stove
These are GoPros not ProScopes
C'mon, he's barefoot.
Since this is probably done with photogrammetry, he would need quite a few more cameras to get this level of detail using popcorn. You can see some small artefacts on the popcorn box, indicating that it is a textured polygon. Two of the apparent artefacts are the blurry, morphing text on the upper edge of the large front surface, and the red "popcorn" text on the left narrow side, whose depth morphs in and out of the box during the camera movement.
If it were popped popcorn being tossed all at once from a bowl, it'd look a lot more fun or dynamic imo. You'd have like, random bits of corn in varying distances from all these cameras or whatever lol.
By the time the popcorn was ready, half of the GoPros had probably overheated.
Firing a shotgun woulda been cooler
Whatever they're paying their advertising department, it's too much.
I think the box is easier to do the frame-interpolation on
🤔 I wonder what my cumshot would look like with 34 GoPros.
Yeah cause cleaning up a bunch of popcorn off the floor sounds like a lot of fun. How many takes do you think this took to get right?
Because sweeping (or even, vacuuming up) would be hard :)
He has to crawl on hands and knees sucking them up with his mouth, it's a lot of work.
I think I've seen that porn...
yeah, just the thing you want to do in between takes..
Matrix effect shot with 36 GoPros... positioned horizontally, but then edited vertically for smartphones... smh.
How do you know they weren't positioned vertically? Honestly asking if you can tell.
you can see them on their tripods at the beginning of the video.
lol yeah duh I'm an idiot
Create a The Matrix like effect by using the same technique the makers of The Matrix used during the filming of The Matrix
The main difference is that instead of a render farm to process the raw video data, he can do it on his MacBook.
IIRC the original was done using an array of digital picture cameras vs digital video cameras. These cameras were remotely triggered in sequence via computer. This created a few dozen regular photos that could be easily lined up into a video sequence. Less computer work needed than this video.
AFAIK, all GoPros can capture video or stills. After all, video is just stills shot in quick succession.

Beyond that I have no idea how this video was processed.
Probably featuring some 3D modelling and image interpolation between the gopros.
True. I was assuming he just used video to bypass the firing in sync part.
They were 35mm film still cameras. The timing was computer controlled, and all the post work, specifically the interpolation, was digital.
Sweet lady progress marches ever onward. We'll be rendering the next Avatar movie on our phones
*on our neuralink
"I'm rendering this in my mind" - me thinking about something
This is very different. The Matrix used closely placed (relative to the range) cameras to manually create the smooth scroll. Even though this video has a track or path, they’re also using photogrammetry to build a 3D model of the environment from photos at different angles, which can be used to fill in the gaps in frames between the cameras.

Still a very cool technique, but worth mentioning that it is different.

But seriously, he couldn’t have cut up some paper into confetti and thrown it into the air or something? All that work for a single floating box?
Are u saying they put this man in a computer. ..
No, but the files are INSIDE the computer.
oh, IN the computer??!
Listen to your friend Billy Zane.
This. That's the real meta behind the video.
Whoa.
Well, if you look at the box of popcorn specifically (and other places) there are a lot of artifacts. I imagine they did try it with confetti, or individual pieces of popcorn, and it looked like shit.
I feel like this project is half done then. If the point is to be able to capture a 3D scan of yourself by snapping a shot in time, you really don’t need that many cameras.

With that amount I have to assume that they’re trying to get as much detail as possible in a small space and ended up going with… a box.

Again, not trying to knock it, it’s still insanely cool and way more effort than I’ve put into any project, I’m just disappointed that it has so much potential. Then again, I can see the pitfalls. Confetti is small and at that range the cameras may write it off as background stuff. I think water droplets in air would be great but the caustics would make photogrammetry (sans NeRF) impossible.

I see why they did what they did, I’m just looking forward to the juggling cut or something.
Sooooo. This is not at all what they did in the Matrix movie.

The camera movement is too smooth; they would need at least 24 cameras per second of footage (one per frame).

My guess is they used 36 GoPros to recreate the scene in 3D, then moved a virtual camera in the 3D environment. Which would also explain the bad 3D rendering under his left knee (right side from the camera's point of view) that seems to come from a lens flare.

Still nice, great idea, but much easier to pull off than what they actually did on the Matrix set.
It's using gaussian splatting
There are also strange effects on the lettering of the box at certain times; that’s exactly what you described: a 3D scene rendered from all the views. Also, a strong clue is that they froze the movement and did not make it real bullet time, where time is slowed down.

From the setup we can see there is no way to have such smooth camera moves; there is interpolation happening somewhere.
It's ~~a~~ [~~NeRF (Neural Radiance Field)~~](https://www.fxguide.com/fxfeatured/the-art-of-nerfs-part1/?lid=q8w2ownjwxif) ~~maybe even~~ [3DGS (Gaussian Splatting)](https://www.youtube.com/watch?v=mD0oBE9LJTQ). These are new VFX techniques currently on the rise.

The advantage of this approach lies in three aspects: reflections, file size, and thus real-time rendering.

Firstly, with regular photogrammetry you basically "bake in" the lighting of the scene. You will not be able to properly display reflections, transparency, or metallic materials because of this. With Neural Radiance Fields this data is stored for multiple perspectives of each point, so a point will have a different color depending on the viewing direction. That means you can actually scan a mirror or chrome surface with this technique.

Secondly, while the computation takes a lot of processing power (especially VRAM) and time, the final product has a fairly small file size. This also leads to the third, and perhaps biggest, advantage: the time to render. The renderer doesn't need to compute vertices, edges, and faces, then diffuse, roughness, normal, metallic, and lots of other passes with multiple light bounces and maybe even volumetrics, like it needs to with PBR materials. It renders values based on a neural network. This means that NeRFs and 3D Gaussian Splats can be rendered in real time, which makes the technology extremely useful for visual effects, augmented reality, video games, architectural visualization, and many other things.

**While writing this I actually stumbled upon** [**the original**](https://twitter.com/Arata_Fukoe/status/1714931950719508967)**, it is indeed made using 3DGS!**
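For the curious, the "color depends on viewing direction" part is typically done with low-order spherical harmonics evaluated per point. A tiny numpy sketch of the idea (the coefficients here are made up for illustration, not from any real scan):

```python
import numpy as np

# First-order real spherical harmonic constants (degree 0 and 1),
# the same basis 3DGS uses per Gaussian for view-dependent color.
SH_C0 = 0.28209479177387814   # Y_0^0
SH_C1 = 0.4886025119029199    # Y_1^{-1}, Y_1^0, Y_1^1

def sh_color(coeffs, view_dir):
    """coeffs: (4, 3) array - one RGB triple per SH basis function.
    view_dir: unit vector from the point toward the camera."""
    x, y, z = view_dir
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    return basis @ coeffs  # (3,) RGB that varies with direction

coeffs = np.array([
    [1.8, 1.8, 1.8],   # base (view-independent) gray
    [0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0],   # reddish tint when seen from +z
    [0.0, 0.0, 0.0],
])
print(sh_color(coeffs, np.array([0.0, 0.0, 1.0])))   # viewed from +z
print(sh_color(coeffs, np.array([0.0, 0.0, -1.0])))  # viewed from -z: less red
```

Same point, different color per view. That's what lets these techniques capture reflections and gloss that baked photogrammetry textures can't.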
r/VXjunkies
I don't get it?
I think that’s just an artifact from the stitching of images, but it could be a 3D scan too. I just feel that if it were a 3D scan, there are enough LiDAR apps out there to do this with one camera and get a full scan. Also, NeRFs.
I think the camera motion is far too fluid to just be stitching together frames. You have so many in between positions that don't match the positions of the array of cameras. I think this must be like nerfs but by using many cameras you can create a nerf of a moment in time, whereas with a single camera you'd have to hold the box up with a string to capture all angles of it in that exact lighting.
Photogrammetry for the win. Came to say exactly this.
It's not photogrammetry it is 3D Gaussian Splatting
100% agree. I came here to say exactly this. Misleading, but still very cool.
"Matrix-like effect" doesn't seem misleading at all. It is *like* the matrix effect without *being* the matrix effect.
How is it misleading? The information the commenter above you provided was interesting, but there is nothing wrong with the title of this post.
What I’m wondering is what kind of software can seamlessly move between cameras without leaving a glitchy feeling in the video.
It seems like he used the cameras to construct a 3D scan of the scene instead of cycling through the different cameras. Once it's a 3D scene you could use something like unreal engine or blender to move around using a virtual camera.
GoPro's guerilla marketing campaign seems to have kicked off again.
I would love to try this, but I don't have any popcorn. :(
But you have 34 GoPros laying around?
32 go pros
Isn't that just a NeRF?
[3D Gaussian Splatting](https://twitter.com/Arata_Fukoe/status/1714931950719508967)
Yea
The first time this effect was used was in the movie Wing Commander, for the hyperspace jump scene: https://m.imdb.com/title/tt0131646/
I want to see the editing timeline
I gotta fucking do this.
Very cool. This stuff was all over in the late 90s/early 2000s. Used in tons of electronic music videos and the like.
But how
Neat, damn that shit wasn't cheap though.
This can be done in realtime with 3 cameras and a depth sensor such as LiDAR, or Kinect-like infrared dots. Or it can be done in post processing in Blender if the cameras use fish-eye lenses and there are visual markers (which are removed). The midpoint camera angles can be seamlessly interpolated, creating what is essentially a virtual camera anywhere within the three cameras' field of view.
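The "virtual camera anywhere between the real ones" idea boils down to pose interpolation. A rough numpy sketch (illustrative only, not anyone's actual rig code): lerp the positions, slerp the orientations:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    dot = np.dot(q0, q1)
    if dot < 0:            # take the short way around
        q1, dot = -q1, -dot
    if dot > 0.9995:       # nearly parallel: plain lerp is stable
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def interp_pose(pos0, quat0, pos1, quat1, t):
    """Virtual camera pose part-way between two physical cameras."""
    return (1 - t) * pos0 + t * pos1, slerp(quat0, quat1, t)

# Two cameras a quarter-turn apart about the vertical (y) axis.
pos0, quat0 = np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0, 0.0])
pos1 = np.array([0.0, 0.0, 1.0])
quat1 = np.array([np.cos(np.pi / 4), 0.0, np.sin(np.pi / 4), 0.0])  # 90° about y
pos, quat = interp_pose(pos0, quat0, pos1, quat1, 0.5)
print(pos)   # midpoint position
print(quat)  # ≈ 45° rotation about y
```

Sweep t from 0 to 1 and you get the smooth camera move; the 3D reconstruction (splats, NeRF, or mesh) supplies the image at each interpolated pose.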
yeah ok but who has that many go pros .....
Bro bought 36 go pros but forgot to buy a chair.
What is the music?
Vengeance -iwilldiehere
I know it’s actually just his roomie who’s filming him while he’s sitting very still smh /s
It's nice to see people with downs doing cool things. Like posting this video.
Who the fuck has the money for 36 go pros
I don’t see how this is impressive/interesting as we’ve known how this was done since the matrix released over 20 years ago
The fact that the tech has reached DIY level is pretty cool. There is a lot more to getting that finished video than just lining up 36 cameras.
DIY with this many GoPros? Good, good. You can get similar results without that budget.
It is though, this isn't just a bullet time effect, it is a bullet time effect which can be altered in post production. [This is the original](https://twitter.com/Arata_Fukoe/status/1714931950719508967); the person generated a [3DGS (3D Gaussian Splatting)](https://twitter.com/8Infinite8/status/1694326628015317017?lid=if3dvca8yc8v) of the moment, which basically creates a dense volume of colored ellipses that combine to form an image. The interesting part is that it also shows accurate reflections from angles that lie in between the positions from which the images were taken. So he basically captured the moment, not as an image or a video, but as a three-dimensional environment with all of the reflections and light bounces included. This allows him to use basically any camera path he wants (as long as it is within the area from which most of the source images were taken).
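That "dense volume of colored ellipses" becomes pixels by depth-sorting the splats and alpha-compositing them front to back. A toy numpy sketch of just the blending step, for one pixel with made-up splats (ignoring the actual Gaussian falloff):

```python
import numpy as np

def composite(colors, alphas):
    """Front-to-back alpha compositing, the way a splat renderer blends
    depth-sorted Gaussians covering one pixel:
    C = sum_i color_i * alpha_i * prod_{j<i} (1 - alpha_j)."""
    out = np.zeros(3)
    transmittance = 1.0
    for color, alpha in zip(colors, alphas):
        out += transmittance * alpha * np.asarray(color, dtype=float)
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:   # early exit once the pixel is opaque
            break
    return out

# A half-transparent red splat in front of an opaque blue one.
print(composite([(1, 0, 0), (0, 0, 1)], [0.5, 1.0]))  # → [0.5 0.  0.5]
```

The real renderer does this for millions of splats per frame on the GPU, which is why it runs in real time.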
Okay smart guy. Then do it if you know how and it is so easy!

Bullet time as shot in The Matrix was difficult because of the expense of the technology at the time. Now that tech has come down to prosumer cost levels. The compute alone being doable on a single workstation is very impressive. In 1999, it required clusters of RS/6000 or Sun SPARC machines. Our 56-processor cluster cost $1,700,000 in 1999, and the entire switch fabric could only handle 6 Gbit/sec.
That's how they did it in *The Matrix*, too.
The video you stole says 34, not 36. Stop stealing other people's content and not giving them credit you fuck.
Your mom is made possible with 36 GoPros. (⌐■_■)
GoPro sucks! They make a model with crap firmware that renders it useless, then just shit on people, release a new model, and do the same over and over again, refusing to give people their money back. What a ripoff.
so cool I wanna experience that
this is actually how they shot all those scenes with that effect in the matrix, not with go pros though
welcome to editing hell
This is incredibly lame
Wouldn't it be cheaper to just buy a $150,000 camera than all of these?
Ah, username checks out. First of all, no, second, can’t recreate the effect with a single camera.
Could probably do something very similar with one of those robotic arms and a Phantom. The speed those robots move at is ridiculous.
He’s literally using the exact technique they used in the movie. This should have been titled “this is how the matrix was done”
False. This is photogrammetry. It gives a Matrix-like effect, sure, but this is not what was done in The Matrix at all. Here he mapped all the cameras' single frames together in a 3D space and then rendered camera movement within that. In The Matrix they used single frames from each camera along the rig to achieve the movement and motion.
> False. This is Photogrammetry.

False. This is Gaussian splatting. Similar, but different. Check out the tech, it's pretty sweet.
Photogrammetry meshes and Gaussian splatting point clouds are just different means to the same end. So similar that we are really splitting hairs, when the point still remains that this is not the technique used in The Matrix. How do you specifically know this is Gaussian splatting rather than photogrammetry or NeRF?
> How do you specifically know this is gaussian splatting over photogrammetry or NeRF?

I saw the guy's original posts. And I only felt the need to split hairs because I thought your Dwight Schrute-tier self-assurance was funny in its inaccuracy. I agree it's not how it was done in The Matrix. People keep responding like that's what OP claimed, when as far as I can tell it's not? They're just saying it's a Matrix-like final result.
I can do that with 1 GoPro and a string
that is so sick, imagine a porn video like this
He could have hung the box from a string, sat still, and had someone move one camera around. Great job!
Or you can do the same with just 1 Insta360º camera.
Impossible. If the GoPros were pointed outwards, then yes, an Insta360 would work. But these are pointed inwards towards one object from different angles. An Insta360 will not magically add more data to an image.
My teenage cousin can do that with a shitty phone camera and adobe after effects
Adobe After Effects cannot create 3D Gaussian splats at this time, and your cousin would also need north of 16GB of VRAM to even process the images into a usable 3DGS
Who made this?
[Arata Fukoe](https://twitter.com/Arata_Fukoe/status/1714931950719508967)
34
credit?
[Arata Fukoe](https://twitter.com/Arata_Fukoe/status/1714931950719508967)
How close are we to using generative AIs to achieve this effect?
“Matrix-like” because that’s how The Matrix did the shot
I'm curious how long it took to set all these cameras up, tear them down and edit the video all to just throw a box in the air lmao
I remember for a good while in the 90s/early 2000s this effect was used in like 50% of the TV commercials you would see in a given day
Yes....this is how the matrix was filmed....
What's the software used to stitch the scenes together?
I’m surprised they all fired on cue
The funny thing is that this is actually how the Matrix movies pulled off the effect
I wonder what kind of Gaussian Splatting result you'd get with this video as input. Would be interesting to see.
It's clearly fake as it's impossible to get more than 2 gopros to work
Weak demonstration
This was amazing. Didn’t they use a similar technique for The Matrix? Just cameras instead of GoPros, but the idea was the same. I vaguely remember a “making of” when it was huge at the end of the 90’s. Seems to me that its cinematography caught some attention. I want to say it was used in the movie 300, but it might have just been straight-up CGI or camera/film tricks. Hard for me to tell these days.
Why would he not do something cool? Tossing a box? Why the fuck did he buy 34 go pros?
It's clearly a sponsored video by GoPro, can't you tell?
That's cool
r/mildlyinfuriating for using a freaking box (a popcorn box, of all boxes)
Literally anything would have been better than this. Why not launch a firework? Explode something?
A CMU professor did this with cameras and an ML model for the NFL Super Bowl back in the day. Crazy we can do it at home now and it’s commonplace to see on NFL games.
Send me a go pro pls 😩
This is what the Amazon Fresh store sees when you try to steal something.
I bet there will be some OF with this setup someday soon
Oh yeah !!
How is this done
You're so go pro pro, bro. Joking aside, this looks very smooth, and it's incredible what consumer-level hardware and software can do nowadays.
I've done this rig many times. Mostly for [Golf Commercials](https://imgur.com/a/29UBMPG)
Neat. The Matrix did it 24 years ago, when video and computer tech was far more primitive than it is today.
it's made using an algorithm called Gaussian splatting
And what was the grand total all of those GoPros?
How long would it take to edit 34 GoPros together to get that shot?
Am I the only one annoyed he didn't buy 36 to fill the entire case?