Hello everyone. I want to use a script to align all colored layers to the gray layers according to their shape outlines, and some of the layers are rotated 180°. Can this be done with a script? I'm not very familiar with scripting. I have used ChatGPT to write the scripts I need before, but this time I don't know whether this function can be expressed in script logic at all.
Photoshop does not perceive the shapes on the merged layer (03) as separate objects. Yes, they can be separated by a script using a transparency mask, but then the question arises of how to find the colored object corresponding to each gray shape:
The situation with flipping the image by 180 degrees already belongs to image recognition, and Photoshop currently does not provide tools for effective recognition and classification of objects. In theory the problem can be solved algorithmically, but false positives are unlikely to be eliminated if the objects are similar.
Perhaps you should describe the problem in more detail. Do the shapes of the objects change? Are the colored objects always located above the gray ones, and do they match them in color? Is user participation acceptable for manually matching objects (for example, by selecting two layers in the Layers palette)? These and other points will help us find a solution.
Thanks for your reply! I can ensure that the colored layers are always above the gray layers, but as for color, I can only guarantee that the gray layer is always gray; the colored layers can be any color. I often need to carry out this step on files with layers of many different shapes, which is why I hope to complete it with a script. If manual matching were required, it would be very cumbersome.
I write Photoshop scripts for a living, and what you are trying to do is just not possible. The only "align" tool available in Photoshop is "Auto-Align Layers", and for that to work you'd need to, at least temporarily, separate all the shapes and give them a background so the layers are not transparent; but even after all that, the alignment is not accurate:
Script works with the following restrictions:
Script here: Aug 13, 2025
I guess I'm taking my words back. Well done @jazz-y! Another restriction would be the exact same "angle", as your script does not adjust rotation and only does a vertical flip. Also, if there were any asymmetrical pattern within the shapes, it wouldn't be aligned correctly, but that might be irrelevant. Out of curiosity, why are you using paths there?
The list of limitations is, of course, much longer, but I hope @mushroom bomb will use the script with the original description of the task in mind.
I tested the script, including with textured layers, and everything worked fine; but indeed, certain color combinations can lead to incorrect operation, since the orientation detection is based on assessing how much the layer lightens in Subtract mode.
I create a selection from the opacity mask, then convert it into a path, then create a separate selection for each segment of the path to divide the layer with gray shapes into separate objects (which allows us to calculate the boundaries and geometric center of each gray shape).
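The orientation check mentioned above can be sketched in plain JavaScript (an illustrative model, not the ActionManager code itself): with the colored layer in Subtract blend mode over its gray partner, a good overlap turns the composite nearly black, so the flip variant whose histogram has the lowest mean level wins.

```javascript
// Illustrative sketch: score one flip variant by the mean level of a
// 256-bin histogram. With the colored layer in Subtract mode over its
// gray partner, a good overlap is nearly black, so the lowest mean wins.
function histogramMean(hist) {
  var pixels = 0, weighted = 0;
  for (var i = 0; i < hist.length; i++) {
    pixels += hist[i];       // pixel count at level i
    weighted += hist[i] * i; // weighted by brightness level
  }
  return weighted / pixels;  // average level, 0 (black) .. 255 (white)
}

function bestOrientation(histogramsByVariant) {
  var best = 0;
  for (var v = 1; v < histogramsByVariant.length; v++) {
    if (histogramMean(histogramsByVariant[v]) < histogramMean(histogramsByVariant[best])) best = v;
  }
  return best; // index of the darkest, i.e. best-matching, flip variant
}
```

This is also why certain color combinations can fool the check: the score only looks at overall lightening, not at where the light pixels are.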
Wow! Thank you very much for your help. To me, this code is like a river discovered by someone lost in the desert. It is really helpful. There are some limitations, but I will first try to modify it myself using ChatGPT, because I later need to group the separate layers and then align the groups, and the overlap problem should also be taken into consideration in my actual application.
Your code gave me great inspiration and ideas, but after trying it I was sorry to find two restrictions: the geometric center of the colored graphics must not cross the boundary of the corresponding gray layer along the X-axis, and the shapes must not overlap each other. These restrictions are somewhat fatal for my actual application, because when I actually use the code I align whole groups to the gray layer, and their positions and overlaps are bound to be chaotic. It is very regrettable, but I am still very grateful.
Attach a file with the most complex case possible and show exactly how the layers should be arranged after the script finishes running. It may be possible to make minor corrections.
As shown in the attached file, this is my file environment in actual use. I will have many such files, and each file will be a little different (especially the patterns inside), but I need to align them to the gray layer (for convenience, I made all files use gray layers for easy alignment). Note that these groups cannot be merged. I don't know if this can be achieved through a script. I have uploaded my files and a diagram of the finished result. Thank you very much for your help.
Such scripts are not universal; they depend heavily on the specific workflow (the initial state of the document). Sometimes changing the smallest detail literally changes everything, so we expect you to describe the real task right away, not a simplified version of it 😉
Let's change the workflow a little and look at the new conditions:
* I have removed all rounding of numeric values from the script code, but this does not mean the result will always be pixel-perfect: the figures still differ slightly, and that is very difficult to take into account. In addition, groups and layers have different bounding boxes, which introduces additional error when calculating offsets.
1. I've confirmed that the layer with the shape-placement template is always at the bottom of the Layers panel.
2. The layers I need to align are always at the top, organized as top-level groups, in the same number as the templates.
3. After testing, I found that when I duplicated all the groups (creating two identical sets of groups over the same gray layer), the alignment was incorrect. I'm not sure whether this is due to the duplicate groups or to overlapping, since I can't pin each group to a fixed position before running the script (the groups in each file are randomly and chaotically positioned, and with two sets of groups there is some overlap).
I've attached a file so you can see the actual situation. I'd really appreciate your help. Thank you very much!
As I mentioned above, Photoshop does not provide tools for image recognition. I don't know of a way to efficiently find and match each shape to a pattern algorithmically (especially if it can be flipped vertically, horizontally, or both at once). Yes, in theory we could match shapes not by distance but by the number of pixels inside the selection (the area), picking the best candidate for each and discarding candidates as matches succeed, but:
That is, the probability of getting a good result and the probability of getting patterns scattered chaotically around the document are approximately the same; it just does not make sense. As currently formulated, the problem cannot be solved in Photoshop: you either need to fundamentally change the workflow so that objects can be matched algorithmically, or align them manually.
UPD: I will try to make a comparison by area (mostly because I am curious what will happen), but given the above, success is unlikely.
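For what it's worth, the area-matching idea can be sketched as a greedy pairing (a hypothetical model in plain JavaScript, not the forum script itself; `area` here stands for the pixel count inside the layer's transparency selection):

```javascript
// Hypothetical sketch of matching by area: each colored object takes the
// unused gray frame whose pixel area is closest, and that frame is then
// discarded from further matching.
function matchByArea(objects, frames) {
  var pairs = [];
  for (var k = 0; k < objects.length; k++) {
    var obj = objects[k], best = -1, bestDiff = Infinity;
    for (var i = 0; i < frames.length; i++) {
      if (frames[i].used) continue;
      var diff = Math.abs(frames[i].area - obj.area);
      if (diff < bestDiff) { bestDiff = diff; best = i; }
    }
    if (best === -1) throw new Error('No pair found for ' + obj.id);
    frames[best].used = true; // discard this frame from further matching
    pairs.push({ object: obj.id, frame: frames[best].id });
  }
  return pairs;
}
```

As noted above, two different shapes can have similar areas, which is exactly why the published script also filters candidates by aspect ratio.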
I'm not sure how much the original "gray" shapes are needed, but if the matching gray and colored shapes are always exactly the same shape-wise, maybe it's easier to just discard the current gray layer and create a new one using selections made from the colored layers?
As I understand it, this is a layout for printing/cutting fabric; that is, the arrangement of elements on the layer with gray objects is not accidental and is important for further work.
Overall, the idea of matching objects by area, taking proportions into account, turned out to be quite workable (although I admit there may be errors in the layout). I will check the code for edge cases and publish it.
@mushroom bomb, I slightly modified the code so that layers are compared based on area (the number of pixels in the layer) and the layer's proportions.
I wanted to apply a transform to align the edges and get a perfect offset, but in your last example the objects simply have different shapes:
This gave a predictable result on all the examples you suggested. I don't see any point in improving this code further; I hope it is a good starting point for your tasks. Good luck!
Script here: Aug 13, 2025
Your files have large linear dimensions. The most time-consuming operations are the selection-expand operation (used when dividing a gray layer into fragments) and layer transformations (used to determine the optimal position of a figure and at the final alignment stage). It is impossible to avoid them; changing the remaining commands would have an insignificant effect on overall performance.
I added the constant SCALE = 0.25 to the code. This is the scale factor by which the document is temporarily reduced while calculating the parameters needed for moving layers, i.e. 25% of the original size (in step 3 the document is restored to its original state, and all coefficients are divided by the scale value). You can change this value: the smaller it is, the faster the script works (but positioning accuracy is lower due to rounding errors); the larger it is, the higher the accuracy (1 means 100%, and there is no point in setting it higher than one).
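The scale arithmetic can be illustrated with a tiny sketch (plain JavaScript, assumed values): offsets are measured on the reduced document and divided by SCALE once the document is restored.

```javascript
// Sketch of the SCALE arithmetic: an offset measured on a document reduced
// to SCALE of its original size is divided by SCALE after the document is
// restored, converting it back to full-resolution pixels.
var SCALE = 0.25;

function toFullResolution(offsetMeasuredAtScale) {
  return offsetMeasuredAtScale / SCALE;
}
```

A 40 px offset measured at 25% corresponds to 160 px at full size; any sub-pixel rounding of the measured value is likewise multiplied by four, which is the speed/accuracy trade-off described above.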
Code updated 2025-08-14 (3)
#target photoshop
const EXPAND_BY = 20, // expand selection from the opacity mask to minimize errors when separating objects
WXHAccuracy = 100, // multiplier for converting WxH to integer (affects rounding accuracy)
WXHTolerance = 0.15, // deviation from the proportion value, below which objects are still compared
SCALE = 0.25;
var apl = new AM('application'),
doc = new AM('document'),
lr = new AM('layer'),
pth = new AM('path');
if (apl.getProperty('numberOfDocuments')) {
var hst = activeDocument.activeHistoryState,
layers = [], deltas = [];
doc.setGlobalFxState(false);
try {
activeDocument.suspendHistory('Checking layers', 'main(1)');
if (layers.frames.length == layers.objects.length) activeDocument.suspendHistory('Find optimal position', 'main(2)');
activeDocument.activeHistoryState = hst;
activeDocument.suspendHistory('Align layers', 'main(3)');
} catch (e) { activeDocument.activeHistoryState = hst; alert(e) }
doc.setGlobalFxState(true);
function main(step) {
switch (step) {
case 1: doForcedProgress('Step 1/3: Get initial bounds of layers', 'step1()'); break;
case 2: doForcedProgress('Step 2/3: Find optimal position for layers', 'step2()'); break;
case 3: doForcedProgress('Step 3/3: Align layers', 'step3()'); break;
}
function step1() {
layers = findLayers();
doc.setScale(SCALE * 100);
if (layers.frames.length) {
changeProgressText('Step 1/3: Preparing to split gray shapes')
layers.frames = grayFramesToLayers(layers.frames);
if (layers.frames.length == layers.objects.length) {
var docRes = doc.getProperty('resolution');
for (var i = 0; i < layers.objects.length; i++) {
lr.selectLayer(layers.objects[i].id)
layers.objects[i] = describeLayer(layers.objects[i].id, doc.getProperty('width') * docRes / 72, doc.getProperty('height') * docRes / 72)
}
} else throw new Error('The number of gray shapes found does not match the number of objects found!')
} else throw new Error('No shape layers found!')
}
function step2() {
var result = alignLayers(layers.frames, layers.objects);
for (var i = 0; i < result.length; i++) {
changeProgressText('Step 2/3: Find optimal position for layers ' + Math.round(((i + 1) / result.length) * 100) + '%')
deltas.push({ layer: result[i], transform: checkOrientation(result[i]) })
};
}
function step3() {
for (var i = 0; i < deltas.length; i++) {
lr.selectLayer(deltas[i].layer.id)
changeProgressText('Step 3/3: Align layer: ' + lr.getProperty('name', deltas[i].layer.id))
lr.transform(deltas[i].transform, deltas[i].layer.center[0] / SCALE, deltas[i].layer.center[1] / SCALE)
lr.move(deltas[i].layer.dX / SCALE, deltas[i].layer.dY / SCALE)
};
}
}
}
function findLayers() {
var layers = getLayersCollection(),
result = {};
result.frames = [];
result.objects = [];
for (a in layers) {
changeProgressText('Step 1/3: Get initial bounds of layers ' + Math.round((parseInt(a) + 1) / (layers.length) * 100) + '%')
if (layers[a].length) {
lr.selectLayer(layers[a].id)
lr.merge();
var id = lr.getProperty('layerID');
lr.setVisiblity(id, 'hide')
result.objects.push({ id: id })
} else result.frames.push({ id: layers[a].id })
}
return result;
}
function getLayersCollection() {
var doc = new AM('document'),
lr = new AM('layer'),
indexFrom = doc.getProperty('hasBackgroundLayer') ? 0 : 1,
indexTo = doc.getProperty('numberOfLayers');
return layersCollection(indexFrom, indexTo)
function layersCollection(from, to, parentItem, group) {
parentItem = parentItem ? parentItem : [];
for (var i = from; i <= to; i++) {
var layerSection = lr.getProperty('layerSection', i, true);
if (layerSection == 'layerSectionEnd') {
i = layersCollection(i + 1, to, [], parentItem)
continue;
}
if (lr.getProperty('background', i, true)) continue;
var properties = {};
properties.id = lr.getProperty('layerID', i, true);
if (layerSection == 'layerSectionStart') {
for (o in properties) { parentItem[o] = properties[o] }
group.push(parentItem);
return i;
} else {
parentItem.push(properties)
}
}
return parentItem
}
}
function alignLayers(a, b) {
var result = [];
do {
var cur = b.shift(),
target = findNearest(cur, a);
lr.selectLayer(cur.id);
lr.move((target.center[0] - cur.center[0]) - cur.offsetX, (target.center[1] - cur.center[1]) - cur.offsetY)
result.push({ id: cur.id, bounds: target.bounds, dX: target.center[0] - cur.center[0], dY: target.center[1] - cur.center[1], center: cur.center, targetId: target.id })
} while (b.length)
return result;
}
function findNearest(a, b) {
var dist = [];
for (var i = 0; i < b.length; i++) {
if (b[i].found) continue;
if (Math.abs(b[i].WxH - a.WxH) > WXHTolerance) continue;
dist.push({ i: i, dist: Math.abs(b[i].square - a.square) });
}
if (!dist.length) throw new Error('Pair for layer ' + lr.getProperty('name', a.id) + ' not found!');
dist.sort(function (a, b) { return a.dist > b.dist ? 1 : -1 })
b[dist[0].i].found = true;
return b[dist[0].i];
}
function checkOrientation(o) {
var transformMatrices = [[100, -100], [-100, 100], [100, -100], [-100, 100]],
transform = [[100, -100], [-100, -100], [-100, 100], [100, 100]],
result = [];
lr.selectLayer(o.id, true)
lr.setBlendingMode('blendSubtraction');
for (var i = 0; i < transformMatrices.length; i++) result.push(findDifference(o, transformMatrices[i], transform[i]));
lr.delete(o.id);
lr.delete(o.targetId);
result.sort(function (a, b) { return a[0] > b[0] ? 1 : -1 })
return result[0][1]
function findDifference(o, transformMatrices, transform) {
lr.removeSelection()
lr.transform(transformMatrices)
lr.selectLayer(o.targetId)
lr.selectTransparency()
var c = getAverageColor(doc.getProperty('histogram'))
lr.selectLayer(o.id, true)
return [c, transform]
}
}
function grayFramesToLayers(l) {
var result = [];
for (var a in l) isolateLayers(l[a].id, lr.getProperty('name', l[a].id), result)
return result;
function isolateLayers(id, title, result) {
lr.selectLayer(id)
lr.selectTransparency();
lr.expandSelection(EXPAND_BY * SCALE);
pth.workPathFromSelection(1);
var pathComponents = (pth.getProperty('pathContents')).getList(stringIDToTypeID('pathComponents'));
if (pathComponents.count > 1) {
for (var i = pathComponents.count - 1; i >= 0; i--) {
changeProgressText('Step 1/3: Split gray shapes to layers ' + Math.round((pathComponents.count - 1 - i) / (pathComponents.count - 1) * 100) + '%')
pth.workPathFromDesc(pathComponents.getObjectValue(i));
pth.selectionFromWorkPath();
lr.selectLayer(id)
if (lr.layerViaCut()) { result.push(describeLayer(lr.getProperty('layerID'))) }
}
pth.delete();
} else { result.push(describeLayer(id)) }
}
}
function describeLayer(id, docW, docH) {
var o = {},
bounds = lr.descToObject(lr.getProperty('boundsNoEffects', id));
o.id = id;
o.width = bounds.right - bounds.left;
o.height = bounds.bottom - bounds.top;
o.bounds = bounds;
o.center = [bounds.left + o.width / 2, bounds.top + o.height / 2];
o.found = false;
if (docW && docH) {
o.offsetX = o.bounds.left < 0 ? 0 - (o.bounds.left) : (o.bounds.right > docW ? docW - (o.bounds.right) : 0);
o.offsetY = o.bounds.top < 0 ? 0 - (o.bounds.top) : (o.bounds.bottom > docH ? docH - (o.bounds.bottom) : 0);
if (o.offsetX != 0 || o.offsetY != 0) lr.move(o.offsetX, o.offsetY)
}
lr.selectTransparency()
o.square = countPixels(doc.getProperty('histogram'))
o.WxH = Math.round((o.width / o.height) * WXHAccuracy) / WXHAccuracy
return o
}
function getAverageColor(h) {
var n = 0, p = 0;
for (var i = 0; i < h.count; i++) {
n += h.getInteger(i)
p += h.getInteger(i) * i
}
return p / n
}
function countPixels(h) {
var s = 0;
for (var i = 0; i < h.count; i++) {
s += h.getInteger(i)
}
return s
}
function AM(target) {
var s2t = stringIDToTypeID,
t2s = typeIDToStringID;
target = s2t(target)
this.getProperty = function (property, id, idxMode) {
property = s2t(property);
(r = new ActionReference()).putProperty(s2t('property'), property);
id ? (idxMode ? r.putIndex(target, id) : r.putIdentifier(target, id))
: r.putEnumerated(target, s2t('ordinal'), s2t('targetEnum'));
return getDescValue(executeActionGet(r), property)
}
this.hasProperty = function (property, id, idxMode) {
property = s2t(property);
(r = new ActionReference()).putProperty(s2t('property'), property);
id ? (idxMode ? r.putIndex(target, id) : r.putIdentifier(target, id))
: r.putEnumerated(target, s2t('ordinal'), s2t('targetEnum'));
return executeActionGet(r).hasKey(property)
}
this.descToObject = function (d) {
var o = {};
for (var i = 0; i < d.count; i++) {
var k = d.getKey(i)
o[t2s(k)] = getDescValue(d, k)
}
return o
}
this.selectLayer = function (id, mode) {
(r = new ActionReference()).putIdentifier(s2t('layer'), id);
(d = new ActionDescriptor()).putReference(s2t('null'), r);
d.putBoolean(s2t('makeVisible'), mode ? mode : false)
executeAction(s2t('select'), d, DialogModes.NO);
}
this.selectTransparency = function () {
(r = new ActionReference()).putProperty(s2t('channel'), s2t('selection'));
(d = new ActionDescriptor()).putReference(s2t('null'), r);
r1 = new ActionReference();
r1.putEnumerated(s2t('channel'), s2t('channel'), s2t('transparencyEnum'));
d.putReference(s2t('to'), r1);
executeAction(s2t('set'), d, DialogModes.NO);
}
this.layerViaCut = function () {
try {
// use a fresh descriptor: the global "d" may hold leftovers from a previous command
executeAction(s2t("copyToLayer"), new ActionDescriptor(), DialogModes.NO);
} catch (e) { return false }
return true;
}
this.expandSelection = function (pixels) {
(d = new ActionDescriptor()).putUnitDouble(s2t("by"), s2t("pixelsUnit"), pixels);
d.putBoolean(s2t("selectionModifyEffectAtCanvasBounds"), false);
executeAction(s2t("expand"), d, DialogModes.NO);
}
this.removeSelection = function () {
(r = new ActionReference()).putProperty(s2t('channel'), s2t('selection'));
(d = new ActionDescriptor()).putReference(s2t('null'), r);
d.putEnumerated(s2t('to'), s2t('ordinal'), s2t('none'));
executeAction(s2t('set'), d, DialogModes.NO);
}
this.workPathFromSelection = function (tolerance) {
(r = new ActionReference()).putClass(s2t("path"));
(d = new ActionDescriptor()).putReference(s2t("null"), r);
(r1 = new ActionReference()).putProperty(s2t("selectionClass"), s2t("selection"));
d.putReference(s2t("from"), r1);
d.putUnitDouble(s2t("tolerance"), s2t("pixelsUnit"), tolerance);
executeAction(s2t("make"), d, DialogModes.NO);
}
this.selectionFromWorkPath = function () {
(r = new ActionReference()).putProperty(s2t("channel"), s2t("selection"));
(d = new ActionDescriptor()).putReference(s2t("null"), r);
(r1 = new ActionReference()).putProperty(s2t("path"), s2t("workPath"));
d.putReference(s2t("to"), r1);
executeAction(s2t("set"), d, DialogModes.NO);
}
this.workPathFromDesc = function (desc) {
(r = new ActionReference()).putProperty(s2t('path'), s2t('workPath'));
(d = new ActionDescriptor()).putReference(s2t('target'), r);
(l = new ActionList).putObject(s2t('pathComponent'), desc);
d.putList(s2t('to'), l);
executeAction(s2t('set'), d, DialogModes.NO);
}
this.merge = function () {
executeAction(s2t("mergeLayers"), new ActionDescriptor(), DialogModes.NO);
}
this.delete = function (id) {
var r = new ActionReference();
if (id) r.putIdentifier(target, id); else r.putEnumerated(target, s2t('ordinal'), s2t('targetEnum'));
(d = new ActionDescriptor()).putReference(s2t("null"), r);
executeAction(s2t("delete"), d, DialogModes.NO);
}
this.move = function (dX, dY) {
(r = new ActionReference()).putEnumerated(s2t("layer"), s2t("ordinal"), s2t("targetEnum"));
(d = new ActionDescriptor()).putReference(s2t("null"), r);
(d1 = new ActionDescriptor()).putUnitDouble(s2t("horizontal"), s2t("pixelsUnit"), dX);
d1.putUnitDouble(s2t("vertical"), s2t("pixelsUnit"), dY);
d.putObject(s2t("to"), s2t("offset"), d1);
executeAction(s2t("move"), d, DialogModes.NO);
}
this.setBlendingMode = function (mode) {
(r = new ActionReference()).putEnumerated(s2t("layer"), s2t("ordinal"), s2t("targetEnum"));
(d = new ActionDescriptor()).putReference(s2t("null"), r);
(d1 = new ActionDescriptor()).putEnumerated(s2t("mode"), s2t("blendMode"), s2t(mode));
d.putObject(s2t("to"), s2t("layer"), d1);
executeAction(s2t("set"), d, DialogModes.NO);
}
this.transform = function (transform, dX, dY) {
(r = new ActionReference()).putEnumerated(s2t("layer"), s2t("ordinal"), s2t("targetEnum"));
(d = new ActionDescriptor()).putReference(s2t("null"), r);
if (dX != undefined) {
d.putEnumerated(s2t("freeTransformCenterState"), s2t("quadCenterState"), s2t("QCSIndependent"));
(d1 = new ActionDescriptor()).putUnitDouble(s2t("horizontal"), s2t("pixelsUnit"), dX);
d1.putUnitDouble(s2t("vertical"), s2t("pixelsUnit"), dY);
d.putObject(s2t("position"), charIDToTypeID("Pnt "), d1);
(d2 = new ActionDescriptor()).putUnitDouble(s2t("horizontal"), s2t("pixelsUnit"), 0);
d2.putUnitDouble(s2t("vertical"), s2t("pixelsUnit"), 0);
d.putObject(s2t("offset"), s2t("offset"), d2);
} else {
d.putEnumerated(s2t("freeTransformCenterState"), s2t("quadCenterState"), s2t("QCSAverage"));
}
d.putUnitDouble(s2t("width"), s2t("percentUnit"), transform[0]);
d.putUnitDouble(s2t("height"), s2t("percentUnit"), transform[1]);
d.putBoolean(s2t("linked"), true);
d.putEnumerated(s2t("interpolation"), s2t("interpolationType"), s2t("bicubic"));
executeAction(s2t("transform"), d, DialogModes.NO);
}
this.setScale = function (width) {
(d = new ActionDescriptor()).putUnitDouble(s2t("width"), s2t("percentUnit"), width);
d.putBoolean(s2t("scaleStyles"), true);
d.putBoolean(s2t("constrainProportions"), true);
d.putEnumerated(s2t("interpolation"), s2t("interpolationType"), s2t("bicubicSharper"));
executeAction(s2t("imageSize"), d, DialogModes.NO);
}
this.setGlobalFxState = function (layerFXVisible) {
(r = new ActionReference()).putProperty(s2t("property"), s2t("layerFXVisible"));
r.putEnumerated(s2t("document"), s2t("ordinal"), s2t("targetEnum"));
(d = new ActionDescriptor()).putReference(s2t("null"), r);
(d1 = new ActionDescriptor()).putBoolean(s2t("layerFXVisible"), layerFXVisible);
d.putObject(s2t("to"), s2t("layerFXVisible"), d1);
executeAction(s2t("set"), d, DialogModes.NO);
}
this.setVisiblity = function (id, mode) {
(r = new ActionReference()).putIdentifier(s2t('layer'), id);
(d = new ActionDescriptor()).putReference(s2t('target'), r);
executeAction(s2t(mode), d, DialogModes.NO);
}
function getDescValue(d, p) {
switch (d.getType(p)) {
case DescValueType.OBJECTTYPE: return (d.getObjectValue(p));
case DescValueType.LISTTYPE: return d.getList(p);
case DescValueType.REFERENCETYPE: return d.getReference(p);
case DescValueType.BOOLEANTYPE: return d.getBoolean(p);
case DescValueType.STRINGTYPE: return d.getString(p);
case DescValueType.INTEGERTYPE: return d.getInteger(p);
case DescValueType.LARGEINTEGERTYPE: return d.getLargeInteger(p);
case DescValueType.DOUBLETYPE: return d.getDouble(p);
case DescValueType.ALIASTYPE: return d.getPath(p);
case DescValueType.CLASSTYPE: return d.getClass(p);
case DescValueType.UNITDOUBLE: return (d.getUnitDoubleValue(p));
case DescValueType.ENUMERATEDTYPE: return t2s(d.getEnumerationValue(p));
default: break;
};
}
}
Thank you for your help. After testing the SCALE parameter, values from 0.5 down to 0.1 seem feasible. Maybe I need to test with more files. Thank you very much again for your help!
I just ran the script on another file with a different shape but the same layer structure. Why did the script report the error "undefined - not an object"? What is going wrong at this step? Basically, each of my files will have a different shape; all I can guarantee is that the number and shapes of the gray layers and of the groups match.
Hi! The problem is here:
As I wrote above, the script compares layers by the area they occupy on the sheet. Clearly, figures of different shapes can have similar areas. To reduce the number of false positives, I additionally compare the layer proportions (width/height): the difference in proportions must not exceed WXHTolerance = 0.15. I expected this to be enough for most cases, but the slight increase in size caused by the gray outline makes a big difference to the W/H aspect ratio:
5348 / 366 = 14.61
5375 / 392 = 13.71
14.61 - 13.71 = 0.9 (much higher than WXHTolerance = 0.15). Yes, we could set WXHTolerance = 1, but that would cause errors when matching other layers.
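The failing comparison can be reproduced with the script's own constants (plain JavaScript; `aspect` mirrors the rounding done via WXHAccuracy):

```javascript
// Reproduces the aspect-ratio check with the script's constants:
// ratios are rounded to two decimals (WXHAccuracy = 100) and layers are
// only compared when the ratios differ by no more than WXHTolerance.
var WXHAccuracy = 100, WXHTolerance = 0.15;

function aspect(w, h) {
  return Math.round((w / h) * WXHAccuracy) / WXHAccuracy;
}

function comparable(a, b) {
  return Math.abs(a - b) <= WXHTolerance;
}

var gray = aspect(5348, 366);    // 14.61
var colored = aspect(5375, 392); // 13.71
// comparable(gray, colored) is false: the ~0.9 difference exceeds 0.15,
// so the pair is rejected and the script reports that no match was found
```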
I added a function that disables all effects in the document before starting the calculations, and also added an error message for the case when no pair is found for a specific layer. I will not republish the code (it takes up a lot of space); the code is updated in this message: Aug 13, 2025
Now there is a strange problem: after alignment, some layer groups end up with an irregular mirror flip. I tried modifying the WXHTolerance value and also tried turning off all layer-style effects first, but there is still some confusion, always on the small pieces. Is this due to an error in recognizing the 180° rotation, or does the script mirror the layer after size recognition in order to match the size?
Attach this file and I'll see what's going on.