Dan Gisselquist / dblclockfft · Issue #15 · Closed
Issue created Jan 12, 2023 by KlepD-SAL (@KlepD-SAL)

Output scaling factor

Hello!

The spec mentions that the core calculates the regular DFT "to within a scale factor".

After trying out the FFT core with several different configurations, it is not quite clear to me what this scaling factor is or where it comes from.
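To make the question concrete, this is my reading of that statement (my own interpretation, not a formula from the spec), with Y[k] the core's output and c the constant I'm trying to pin down:

```math
Y[k] = \frac{1}{c} \sum_{n=0}^{F-1} x[n]\, e^{-j 2 \pi n k / F}, \qquad k = 0, 1, \ldots, F-1
```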

As a reference, I compared the FFT core's results against a Python FFT (NumPy's np.fft.fft, as well as a manual DFT loop; a sketch of the comparison is included below) and came to the following results:

| Input bit width (N) | Output bit width | FFT length (F) | Scale factor |
|---------------------|------------------|----------------|--------------|
| 8                   | 10               | 8              | 2            |
| 8                   | 11               | 16             | 1            |
| 8                   | 11               | 32             | 4            |
| 8                   | 12               | 64             | 2            |
| 8                   | 12               | 128            | 8            |
| 8                   | 13               | 256            | 4            |
| 8                   | 13               | 512            | 16           |

(Core generation command: `fftgen -f F -n N -p 1024 -x 4`)

The output bit width was not manually capped, so the generator decided how many bits are required; for these runs, the chosen widths appear to follow N + floor(log2(F)/2) + 1. The same test performed with an input bit width of 16 led to the same scale factors, so I assume they are deterministic. And since they are all powers of two, I also assume they are the result of certain implementation details (shifting, truncation, etc.).
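For completeness, here is a minimal sketch of the comparison I ran. The core's outputs are assumed to come from my Verilator co-simulation and are loaded from a file; the file name `core_out.txt` and its two-column real/imag format are placeholders from my own test bench, not anything defined by the core:

```python
import numpy as np

F = 8   # FFT length (fftgen -f)
N = 8   # input bit width (fftgen -n)

rng = np.random.default_rng(0)
# Random full-scale N-bit signed test input
x = rng.integers(-(1 << (N - 1)), 1 << (N - 1), size=F)

# Reference 1: NumPy's FFT
ref_fft = np.fft.fft(x)

# Reference 2: a manual DFT loop, to rule out any NumPy convention issues
n = np.arange(F)
ref_dft = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / F)) for k in range(F)])
assert np.allclose(ref_fft, ref_dft)

# Core output from the co-simulation (placeholder: two columns, real and imag)
core = np.loadtxt("core_out.txt")
core = core[:, 0] + 1j * core[:, 1]

# Per-bin magnitude ratio; for a pure bit shift this should be a constant
# power of two, up to the core's internal rounding
mask = np.abs(core) > 0
print("estimated scale factor:", np.median(np.abs(ref_fft[mask]) / np.abs(core[mask])))
```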

Unfortunately, I didn't find any more information on this topic in the other issues, the blog posts, the spec, or the README.

Would it be possible to elaborate on that?

I'm also a little surprised that no one has mentioned this before, since having the correct scaling factor seems quite important to me for getting "correct" results (i.e., results that match a co-simulation).

Thanks a lot!
