Known Participant
August 15, 2025

Why do I see a dark fringe/halo in scaled-down RGBA images in Designer and not in any other software?

  • August 15, 2025
  • 13 replies
  • 526 views

I render a 16-bit denoised beauty AOV or normal AOV of small plants in Octane or Cycles and get an absolutely perfect, fringe/halo-free image around the alpha edges when I downscale a 2048x2700 render to exactly 2k x 2k in Photoshop / Affinity Photo / GIMP / Krita.

In Designer it's fine if no scaling is involved at all. But as soon as the initial image isn't a perfect power of two, or I need to tweak the scale, I see those pesky fringes all over.

 

If I set premultiplied alpha blending I get a glowing white halo; if it's straight, I get those dark fringes. It's never an issue in Photoshop. Much to my surprise it's mostly fine in Painter too, not as perfect as in Photoshop, but still fine. Does anyone know a workaround for Designer? The only way I've found in Designer is to do everything with every color image in linear gamma, 32-bit float blending, and that's a whole lot of extra "too slow / 99% memory" pain.

13 replies

Known Participant
August 17, 2025

Nope, it's not working. Looks like the subtle fringe isn't from the blending math at all, since at an exact power-of-two bitmap scale there is none at all. It comes from the black background in the rendered images where alpha is zero, which starts to bleed into the edges when bitmaps are scaled. I have no idea how Photoshop and the like avoid it, but the only remedy I've found so far is "solidifying" the RGB part, like what the Solidify filter does in Photoshop. And IMO we need this as an option when dropping an image with alpha in.

 

To be honest, even a simple solidify isn't enough, since AI denoisers in modern renderers tend to add random color blotches along those un-premultiplied alpha edges. It should be something like calculating the median color of all pixels with alpha above some threshold and mixing it into an iterative 1-pixel max-filter expansion of the edges to tame those random color blotches.

 

I asked ChatGPT to make me a script that does this, and it looks like it fixes the issue completely. No fringe at all after the initial re-scale or a 2D transform:

# solidify_max_tint_blend.py
# Iterative max-dilate with tinting, then composite the original PNG back on top

from PIL import Image
import numpy as np
import cv2
import os


def compute_median_color(rgb, alpha, alpha_threshold=100):
    # Median RGB of all pixels whose alpha is above the threshold
    mask = alpha > alpha_threshold
    if not np.any(mask):
        return np.array([0, 0, 0], dtype=np.uint8)
    return np.median(rgb[mask], axis=0).astype(np.uint8)


def max_iter_with_blend(input_path, output_path,
                        iterations=5, radius=1,
                        alpha_threshold=100, mix_strength=20,
                        final_blend_thr=10):
    img = Image.open(input_path).convert("RGBA")
    orig = np.array(img)

    base_rgb = orig[:, :, :3].copy()
    alpha = orig[:, :, 3].copy()
    composite_rgb = base_rgb.copy()

    median_color = compute_median_color(base_rgb, alpha, alpha_threshold)

    kernel = cv2.getStructuringElement(cv2.MORPH_RECT,
                                       (radius * 2 + 1, radius * 2 + 1))

    # Iterative dilation + tint: grow the coverage outwards one ring at a time
    prev_alpha = alpha.copy()
    for i in range(iterations):
        new_alpha = cv2.dilate(prev_alpha, kernel)

        # Max-filter (dilate) each RGB channel to push the edge colors outward
        blurred_rgb = composite_rgb.copy()
        for c in range(3):
            blurred_rgb[:, :, c] = cv2.dilate(blurred_rgb[:, :, c], kernel)

        # Pixels newly covered by this dilation step
        halo_mask = ((new_alpha > 0) & (prev_alpha == 0))[:, :, None]

        # Tint the dilated colors toward the median color to tame random
        # denoiser blotches; only the new halo ring is kept by the merge below
        if mix_strength > 0:
            strength = mix_strength / 100.0
            blurred_rgb = ((1 - strength) * blurred_rgb +
                           strength * median_color).astype(np.uint8)

        # Merge the new halo ring into the running composite
        composite_rgb = np.where(halo_mask, blurred_rgb, composite_rgb)

        prev_alpha = new_alpha

    # Final step: put the original image back on top where alpha > final_blend_thr
    mask_orig = (alpha > final_blend_thr)[:, :, None]
    final_rgb = np.where(mask_orig, base_rgb, composite_rgb)

    # Save the solidified RGB (the source PNG's alpha channel is not written back)
    out = Image.fromarray(final_rgb.astype(np.uint8), "RGB")
    out.save(output_path, "PNG")
    print(f"Saved: {output_path}")


if __name__ == "__main__":
    input_files = [
        "[OctaneExport]0081_DenoisedBeauty.png",
        "test.png"
    ]

    ITERATIONS = 8
    RADIUS = 1
    ALPHA_THRESHOLD = 100
    MIX_STRENGTH = 20     # tint intensity (%)
    FINAL_BLEND_THR = 10  # keep the original where alpha > this

    current_folder = os.path.dirname(os.path.abspath(__file__))
    output_folder = os.path.join(current_folder, "max_tint_blend")
    os.makedirs(output_folder, exist_ok=True)

    for file in input_files:
        in_path = os.path.join(current_folder, file)
        if not os.path.exists(in_path):
            print(f"⚠️ Missing: {file}")
            continue
        out_path = os.path.join(
            output_folder,
            os.path.splitext(file)[0] + f"_maxblend{ITERATIONS}_mix{MIX_STRENGTH}.png"
        )
        max_iter_with_blend(in_path, out_path,
                            iterations=ITERATIONS,
                            radius=RADIUS,
                            alpha_threshold=ALPHA_THRESHOLD,
                            mix_strength=MIX_STRENGTH,
                            final_blend_thr=FINAL_BLEND_THR)

    print("\n✅ Done. Check the 'max_tint_blend' folder.\n")

 

 

IMO we need something like that, GPU-powered, in Designer when dropping an RGBA image, before it gets scaled to a power of two, at least as an option. The script needs Pillow (plus NumPy and OpenCV) to run.

 

I mean, this script solves the issue for me, but it would be nice if Designer had something like that built in, computed on the GPU for speed, as a preferences option. Then it could read PNG output directly from Octane or any other renderer to composite 3D-rendered details into materials. Wouldn't that be smooth and easy?

Please consider this a feature request rather than a bug report.

 

 

Community Manager
August 17, 2025

Btw, I think a premultiplied workflow should work, and you don't necessarily need 32 bit to store premultiplied data. You just have to set the blend mode to premultiplied and change the 2D view option to display the premultiplied alpha properly over the checker.
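For reference, "premultiplied" just means the RGB channels are already scaled by the alpha. A minimal NumPy sketch of baking that into an 8-bit RGBA PNG outside Designer (the file names are placeholders, not from this thread):

# premultiply_sketch.py — convert a straight-alpha RGBA PNG to premultiplied alpha
from PIL import Image
import numpy as np

img = np.array(Image.open("straight_alpha.png").convert("RGBA")).astype(np.float32)  # placeholder file name
rgb, a = img[..., :3], img[..., 3:4] / 255.0
premult = np.dstack([(rgb * a).round(), img[..., 3:4]]).astype(np.uint8)  # RGB scaled by alpha, alpha kept
Image.fromarray(premult, "RGBA").save("premultiplied.png")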

Known Participant
August 16, 2025

So the only solution is to scale it to a power of two in a 3rd-party pixel editor before feeding it to Designer. That's a lot of extra pain and a break in an otherwise one-render-button process. I've used GPU renderers like Cycles and Octane for years instead of the splatter node, which never gives you a nicely anti-aliased picture of such things, and that "scale to a power of two in other software" step is really annoying.

Community Manager
August 16, 2025

Painter does not perform the initial upscale; it supports non-power-of-two sizes. This is not the case for the Substance engine, which computes the images within the graph. It's always been like this and is unlikely to change...

You can work with rectangular textures, but they have to be power of two (e.g. 1024x512).

Known Participant
August 16, 2025

Yeah, I see, but then why does Painter do it without any halo, and every other 2D app too? Could it be fixed for input images, so we don't have to crop them all the time at least, and for cases where the texel size is supposed to be slightly stretched in the game? The input node only reads the image once, so some solution could be implemented, I think, like calculating the median of all non-black pixels and replacing the black pixels with that color (see the sketch at the end of this post). Maybe as an option?

 

I actually have no idea how other 2D software works around it, including Substance Painter, so maybe there is just a right approach for this.
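Roughly what I mean, as a minimal NumPy sketch of that median-fill idea (assuming an 8-bit straight-alpha RGBA PNG; the file names are placeholders):

# median_fill_sketch.py — replace fully transparent (black) pixels with the
# median color of the visible pixels, keeping the alpha channel unchanged
from PIL import Image
import numpy as np

rgba = np.array(Image.open("render.png").convert("RGBA"))  # placeholder file name
rgb, alpha = rgba[..., :3], rgba[..., 3]

median_color = np.median(rgb[alpha > 0], axis=0).astype(np.uint8)  # median of visible pixels
solid_rgb = np.where((alpha == 0)[..., None], median_color, rgb)   # fill the transparent areas

Image.fromarray(np.dstack([solid_rgb, alpha]), "RGBA").save("render_median_fill.png")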

Community Manager
August 16, 2025

The graph only supports power-of-two images. If you drop this 1024x1307 px image in the graph, it'll be automatically upscaled to 1024x2048; this resampling goes past the 1 px straight-alpha edge of your input and averages values with the black beyond.
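To see why that averaging darkens the edge, here is a toy example with made-up numbers comparing a straight-alpha average with a premultiplied one:

# fringe_arithmetic_sketch.py — toy numbers showing why resampling a straight-alpha
# edge against black produces a dark fringe, while premultiplied math does not
import numpy as np

# One opaque grass-colored pixel next to one fully transparent black pixel
rgb   = np.array([[180.0, 200.0, 120.0], [0.0, 0.0, 0.0]])
alpha = np.array([1.0, 0.0])

# Straight alpha: RGB and alpha are averaged independently
straight_rgb   = rgb.mean(axis=0)   # [90, 100, 60] -> darkened edge color
straight_alpha = alpha.mean()       # 0.5

# Premultiplied: average (RGB * alpha), then un-premultiply by the averaged alpha
premult_rgb = (rgb * alpha[:, None]).mean(axis=0) / alpha.mean()  # [180, 200, 120] -> color preserved

print(straight_rgb, straight_alpha, premult_rgb)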

Known Participant
August 16, 2025

Yes, when you crop the image so that it matches a power-of-two size it's perfectly ok. The problem occurs when you drop an image that isn't exactly a power of two and the input node scales it, or when you need to scale it later in the node flow. It's not very eye-catching or noticeable, but it's still there. Try scaling it to a non-power-of-two size.

Community Manager
August 16, 2025

I cropped your image to 1024x1024.
I don't seem to get a black halo...

 

Known Participant
August 16, 2025

I assume I could probably use the Distance node to expand the edge colors over the black areas in the RGB part and re-apply the alpha before any 2D transform to fix this (something like the sketch at the end of this post), but I can't do that for the input node, which rescales the PNG to a power of two the moment I drop it in.

 

Here's a screenshot of this rendered dry grass scaled down to 2k x 2k: left in Designer, right in Affinity Photo. You can clearly see the dark fringe on the Designer side. At the bottom is the same image with alpha off: a typical un-premultiplied render.
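For reference, this is roughly what a Distance-node style solidify would do, as a minimal sketch (assuming an 8-bit straight-alpha RGBA PNG; the file names are placeholders): every fully transparent pixel takes the color of its nearest visible pixel, then the original alpha is re-applied.

# distance_solidify_sketch.py — fill transparent areas with the nearest visible color
from PIL import Image
import numpy as np
import cv2

rgba = np.array(Image.open("render.png").convert("RGBA"))  # placeholder file name
rgb, alpha = rgba[..., :3], rgba[..., 3]

# The distance transform labels every pixel with its nearest zero pixel of the source;
# the zero pixels here are the visible (alpha > 0) ones
transparent = (alpha == 0).astype(np.uint8)
_, labels = cv2.distanceTransformWithLabels(
    transparent, cv2.DIST_L2, 5, labelType=cv2.DIST_LABEL_PIXEL)

# Map each label back to the coordinates of its visible pixel
ys, xs = np.nonzero(alpha > 0)
lut = np.zeros((labels.max() + 1, 2), dtype=np.int64)
lut[labels[ys, xs]] = np.column_stack([ys, xs])
ny, nx = lut[labels, 0], lut[labels, 1]

solid_rgb = rgb[ny, nx]  # every pixel takes the color of its nearest visible pixel
Image.fromarray(np.dstack([solid_rgb, alpha]), "RGBA").save("render_solidified.png")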

Known Participant
August 16, 2025