I'm running the latest Adobe Illustrator on a Mac Pro with 8 cores, and I'm creating a large print image with a drop shadow. It's taking several minutes to render the drop shadow on this system, and I'm curious whether there's a setting to make Illustrator utilize all the cores. Because this is reminding me of the 1990s.
I remember reading this article a while ago; not sure if the drop shadow creation runs into the same problem
Mordy Golding, worked at Adobe Systems
Vector graphics have their advantages and disadvantages. This, sadly, is one of the downsides. That's because vector graphics are drawn in a linear stacking order.
Take one simple example -- a single layer with a single rectangle. Duplicate that layer 100 times. Even though only the top layer is visible, Illustrator draws each rectangle from the bottommost layer to the top. Illustrator can't draw layer 50 until the previous 49 are done. This is unlike Photoshop, which is concerned only with which pixels are ultimately visible.
Extending the concept further: say you had 4 cores. You could take a Photoshop file, split it into a grid of 4 areas, and tell each core to draw one area -- all at the same time. That's because a pixel in one grid area has no impact whatsoever on pixels in another. Since you can split a photo into multiple independent areas, you can assign multiple cores to render those areas simultaneously.
However, in Illustrator, each object is drawn in the order it appears in the stacking order. So even if I split the image into a grid of four, the art still has to be built object by object no matter which grid area it falls in -- meaning each of the 4 cores would still have to wait for all the underlying objects to be drawn before it could finish rendering its area.
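The two models described above can be sketched in a few lines of Python (a toy illustration with invented names, not Illustrator's actual code): a per-tile raster filter parallelizes trivially because tiles are independent, while the painter's-algorithm loop over stacked objects is inherently serial.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(tile):
    # Any per-pixel filter works tile-by-tile; no tile ever reads another tile.
    return [min(p + 10, 255) for p in tile]

# Raster model: split the image into 4 independent "grid areas"
# and hand one to each worker, all at the same time.
image_tiles = [[0, 50], [100, 150], [200, 250], [30, 60]]
with ThreadPoolExecutor(max_workers=4) as pool:
    filtered = list(pool.map(brighten, image_tiles))

# Vector model: 100 identical stacked rectangles, painted bottom-to-top.
# Rectangle N overwrites whatever rectangles 0..N-1 left on the canvas,
# so this loop cannot be split across cores the way the tiles above can.
canvas = {}
for i in range(100):
    for x in range(4):
        canvas[x] = f"rect{i}"   # every rectangle paints the same pixels
# Only rect99 is visible, yet all 100 had to be drawn in order.
```

The dependency, not the amount of work, is what blocks parallelism: each iteration of the vector loop needs the canvas state left by the previous one.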
What this means is that only some kinds of functions can support multi-core operation in Illustrator. For example, if you print a large file, Illustrator will hand the print spooling off to another core and return you to your document immediately so you can continue working. But sadly, this isn't possible for speeding up linear tasks like drawing/rendering art.
This doesn't make much sense to me. Any program, no matter what it does, has some set of operations to complete, and those operations can be split up for efficiency, even if that means dedicating one core to managing the splitting.
Cutting the screen into parts and giving one to each core to render does make sense, but still, why can't the vector rendering itself be parallelized?
So if we have 50 vector objects to render, the program should send one object to each core instead of having a single core render every one of them on its own.
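The idea proposed above can be sketched as: rasterize each object to its own buffer in parallel, then composite the buffers in stacking order with the Porter-Duff "over" operator (a toy sketch on a tiny 1-D canvas, with invented names; nothing Adobe actually ships). One likely catch is memory: this needs a full-resolution buffer per object, which is a plausible reason real apps avoid the naive version.

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH = 8  # tiny 1-D "canvas" for illustration

def rasterize(obj):
    """Render one object to its own buffer of (value, alpha) pixels."""
    start, end, value, alpha = obj
    return [(value, alpha) if start <= x < end else (0.0, 0.0)
            for x in range(WIDTH)]

def over(top, bottom):
    """Porter-Duff 'over': composite one finished buffer on another."""
    out = []
    for (tv, ta), (bv, ba) in zip(top, bottom):
        a = ta + ba * (1 - ta)
        v = (tv * ta + bv * ba * (1 - ta)) / a if a else 0.0
        out.append((v, a))
    return out

# Objects listed bottom-to-top: (start, end, value, alpha)
objects = [(0, 8, 0.2, 1.0), (2, 6, 0.8, 1.0), (4, 8, 0.5, 0.5)]

# Step 1 is embarrassingly parallel: each object renders independently.
with ThreadPoolExecutor() as pool:
    buffers = list(pool.map(rasterize, objects))

# Step 2 preserves the stacking order but only touches finished buffers.
canvas = buffers[0]
for buf in buffers[1:]:
    canvas = over(buf, canvas)
```

Because "over" is associative, step 2 could even be a parallel tree reduction; the per-object buffers are the expensive part.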
We are in 2018. I have seen other programs utilize more cores for vector graphics, so it can be done. I have had my 6-core CPU for ages now, but most of the cores just sit there idle. This is unacceptable, especially since we are in a somewhat stagnant CPU environment: until AMD got those Ryzens out, we could barely get 5-10% better CPUs from Intel, so it's not as if we can just upgrade the hardware (which doesn't make much financial sense anyway if the software can simply be made better).
Please, whoever is listening at Adobe: I know these programs are old, but do something to make them faster. I don't know what; rewrite them if you have to, so they work better on modern hardware. I know you can.
Ok, here's my build:
CPU: Intel Xeon E5-4669 v3 18-core
GPU: Titan V
RAM: DDR4 64GB
SSD: M.2 NVMe 500GB
I do genomics. I do prefer Illustrator for editing figures, and based on my build, I should be able to select and delete a million objects rather quickly, right? Nope. The struggle is real. Even with GPU acceleration! Maybe I need a Quadro (or maybe I just need to spend more time coding my figures, lol).
I love Illustrator, but damn, I hate the bottlenecks!
*Adobe should really add a plugin to maximize Illustrator performance based on the user's build.
Nope, a (high-end) Quadro doesn't make it much faster. The upside is I can work on other crap while I wait. Even with Illustrator locked up working hard -- adding, for instance, a really tight Scribble effect to a very large Image Trace -- it only gets my CPU usage up to 10%. It's not the hardware; it's the algorithms.
Equipment: Dell T7920 workstation, dual 20-core Xeon Gold = 40 cores total (80 logical cores), 256 GB RAM, all NVMe drives, NVIDIA Quadro GV100 with 32 GB video RAM.
Now, 2 years after this post, it still looks like only one core is utilised.
Flattening a 10 m x 3 m artboard to raster hangs ("not responding") while still utilising only one core (8% CPU) and 5 GB of RAM (out of 128 GB), with zero use of the GPU on the Quadro A4000. A massive computer just idling with one cylinder burning out while the rest do nothing... such a pity. And unlike your example, it does drastically slow all other apps; it's as if maxing one core forces Task Manager to throttle the others, so parallel work in other programs grinds to a near halt... c'mon Adobe 😞
Mordy Golding's explanation from 2015 is correct. I'm not aware that any of these kinds of applications -- be it Illustrator, CorelDRAW, Affinity Designer, or even defunct applications like FreeHand -- have ever been able to spread the task of rendering the stacking order of vector objects in a layout across multiple CPUs or CPU cores. Some functions can be off-loaded to the GPU, such as animated zoom. I don't know if Illustrator or similar rival applications can assign multiple open documents to other cores. Nevertheless, the paradigm with vector drawing programs is a very linear one. I'm guessing it's not at all easy for software engineers to turn that into a multi-threaded affair; otherwise it would have happened by now. They've had over 30 years to figure it out.
Regarding converting a 10 meter by 3 meter artboard to raster format, that crashing problem raises multiple questions. The first troubleshooting measure is making sure the various pieces of hardware and software are properly updated (and that there are no known bugs with those updates). What is the document resolution? I work in the sign industry and see a lot of ill-prepared, client-provided artwork. One common mistake clients make is using a high resolution setting meant for a small page size on something to be printed large format. 300ppi for raster-based effects is acceptable for a letter-sized page; it's overkill for large-format use. I'll use 72ppi for full-size vehicle wraps and 25ppi for full-size billboards. Bad settings can bog down even the most powerful of computers.
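Those resolution figures translate directly into pixel counts and memory. A back-of-envelope calculation (assuming 4 bytes per uncompressed RGBA pixel; the helper name is illustrative) shows why rasterizing a 10 m x 3 m artboard at 300ppi is punishing:

```python
# Pixel/memory math for a 10 m x 3 m artboard at the resolutions
# mentioned above, assuming 4 bytes per uncompressed RGBA pixel.
INCH_PER_METER = 39.3701

def raster_cost(width_m, height_m, ppi):
    w_px = round(width_m * INCH_PER_METER * ppi)
    h_px = round(height_m * INCH_PER_METER * ppi)
    gigabytes = w_px * h_px * 4 / 1024**3
    return w_px, h_px, gigabytes

for ppi in (300, 72, 25):
    w, h, gb = raster_cost(10, 3, ppi)
    print(f"{ppi:>3} ppi: {w} x {h} px, ~{gb:.2f} GB uncompressed")
```

Dropping from 300ppi to 72ppi cuts the uncompressed size by a factor of roughly 17 ((300/72) squared), which matches the advice above.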
Very often, when I want to convert an entire Illustrator-based layout to pixel-based form, I'll copy/paste the artwork into a target document in Adobe Photoshop. Usually I take the hybrid approach, using Illustrator to create the final document holding both raster-based and vector-based elements.
Getting into the weeds of workflow, appropriate working resolutions, and bouncing between the best application for each output aside, there are glaring limitations baked into the current versions that have had many years to be fixed.
I'd love to hear how anyone manages DXF round-tripping with third-party apps on anything beyond a 2 m x 2 m artboard; the export is blocked for anything above that. Why the throttle is on the artboard's overall dimensions rather than the document's complexity baffles me... I too work in the large-format print/design space, and being able to place and export critical dimensional data matters more at the larger end than the smaller.
300ppi isn't overkill when, instead of a banner, you have a very large integrated design that is bigger than 2 m x 2 m and must have sharp text and linework to be viewed at less than arm's length. Having to dive into composite documents with many embedded tiles of artboards becomes impossible when the art is supposed to be connected...
The way you maintain sharp text and line work is by leaving those elements as vector-based objects. Converting those vector-based elements to pixels severely degrades their quality.
Applications like Illustrator and InDesign can support layouts with a mix of vector-based and raster-based objects. The vector-based objects will print at the maximum resolution the printer can support, which is typically way above 300ppi. The raster-based elements in the layout -- photographs, live drop shadows or whatever -- do not need to be 300ppi on a large banner. Some elements, such as photographs, have only so much real, native detail in them. Up-rezzing them adds zero new detail, contrary to the idiots who write all those CSI: Miami-style TV shows where every computer has a magic "enhance" button that finds hidden detail in a blurry photo. A setting of 150ppi is acceptable for posters and small banners. The 72ppi level is good for large banners, vehicle wraps and modest-sized sign backgrounds. There is a lot of push and pull between the viewing distance of the finished product and keeping file sizes manageable.
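The viewing-distance trade-off can be quantified with a common rule of thumb: a viewer with roughly 1 arcminute of visual acuity cannot resolve pixels that subtend less than that angle, so the useful ppi falls off with distance. This is a generic optics estimate (the function name and distances are illustrative), not an Adobe guideline:

```python
import math

def max_useful_ppi(viewing_distance_m):
    """PPI beyond which a viewer with ~1 arcminute of acuity sees no
    extra detail (a standard rule of thumb, not a printing spec)."""
    one_arcmin = math.radians(1 / 60)
    # Smallest resolvable pixel, in inches, at this distance.
    pixel_inches = viewing_distance_m * 39.3701 * math.tan(one_arcmin)
    return 1 / pixel_inches

for dist, label in [(0.6, "arm's length"), (3, "across a room"),
                    (15, "billboard distance")]:
    print(f"{label:>18}: ~{max_useful_ppi(dist):.0f} ppi")
```

At arm's length this lands near 150ppi, and it drops into the tens of ppi at room distance and beyond, which is consistent with the figures quoted above.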
Manageable file size vs. raster/vector complexity is always a balance. However, given the number of things that can go wrong down the production chain (mesh errors, overprint errors, spot-colour loss, alphas interfering), unless the designer has flattened the art themselves, based on their own stacking order of elements, leaving vector flattening to a third party unfairly expects their calibration to match your expectations. It's all well and good to proof a large offset run of one design, but for large artworks, integrated architectural pieces, and one-off event pieces, there is little opportunity to get more than one bite at the apple.
The limitations on artboard sizes and cross-application export options across the suite still leave much to be desired, IMHO. No DXF/DWG export beyond a 2 m artboard regardless of complexity (or, in most cases in AI, the lack of it), for one; it means constant scaling when moving between CAD and AI. Anyone working in real-world spatial design must be as pissed off about this as I am!
Also, no 1:1 PSD<->AI/EPS transfers... there's constant dimensional drift that makes bouncing work between apps feel like I'm recording a song off the radio onto a TDK tape instead of actually moving raw data. The inability to export a flat JPG at 300 DPI over 1.2 m means everything has to be sliced up into tiles, and when the geometry doesn't stay true, alignment is manual... try doing that over a 3 m x 20 m wall space! 😞
Sharing Illustrator AI files (or PDF files generated by Illustrator) containing a mix of raster-based and vector-based elements does not have to require any flattening work from the user on the receiving end. Pure vector-based objects with no effects applied require no flattening or expanding. And even if certain live effects have been applied to vector objects -- something as simple as line strokes or as complex as free-form gradient fills -- it shouldn't matter either. If the AI file is opened in another user's copy of Illustrator, the effects will be preserved, provided both people are using the same version of Illustrator. Live Illustrator-based effects will also be retained in Illustrator-generated PDFs.
The only real hazard about sharing AI files with another Illustrator user is effects generated by plugins. Those have to be flattened, expanded or released unless the user on the other end has the same plugins.
Regarding DXF import/export, Illustrator is not a CAD program. There are many fundamental differences between Bezier-curve-based vector graphics applications and CAD software, which often draws objects as a bunch of disconnected line segments or arcs. Nothing is easy about moving assets between those two very different paradigms, and Illustrator will never have the same artboard size capabilities as a CAD/CAM/CAE application. I don't understand why Illustrator has had so many issues lately with importing and exporting DXF files. I sidestep that business by having anyone wanting to send me AutoCAD-generated artwork (such as a building elevation) send it as a PDF instead of a DXF.
Regarding 1:1 size transfers between Photoshop and Illustrator: all Photoshop documents are based on whole pixel counts, regardless of whether the document rulers are set up in inches, centimeters, etc. The reason you see "dimensional drift" between the two applications is that vector objects are not limited to whole-pixel coordinates. If you want the sizing to be precisely 1:1 between an Illustrator document or object and a Photoshop layout, both have to be set up precisely. I have no problem doing this for either large-format printed projects or graphics tailored for LED-based variable message signs.
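The whole-pixel snapping described above can be demonstrated with simple arithmetic (illustrative numbers and helper name, not either application's internal code): any physical dimension that isn't an integer number of pixels at the document's ppi gets rounded when rasterized.

```python
# "Dimensional drift": Photoshop documents live on a whole-pixel grid,
# so a physical size that isn't an integer pixel count at the chosen
# ppi gets rounded on rasterization.
def rasterized_size_mm(size_mm, ppi):
    inches = size_mm / 25.4
    pixels = round(inches * ppi)      # snap to whole pixels
    return pixels * 25.4 / ppi        # physical size after rounding

ppi = 72
for size in (100.0, 105.3, 3000.0):
    out = rasterized_size_mm(size, ppi)
    print(f"{size} mm -> {out:.3f} mm (drift {out - size:+.3f} mm)")
```

Setting up both documents so target sizes land on whole pixel counts (for example, 25.4 mm at 72 ppi is exactly 72 px) keeps the transfer 1:1, which is the "set up precisely" advice above.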
I agree that Illustrator could be much faster than it is now. I use it on my personal MacBook Pro and on a Windows desktop and laptop at work. While both Windows installations perform more slowly than the Mac installation, all three have speed issues. I don't know whether the answer is modifying or rewriting it to use multiple threads, or utilizing platform frameworks like Metal on Apple operating systems, but whatever the solution is, I hope Adobe improves this soon. Waiting several minutes to move objects, render effects and save files in Adobe Illustrator seems excessive.