JS for Bug Bounties 2.0 Extreme Edition 2024

Kongsec
5 min read · Jun 7, 2024


Hi everyone,

I am Aditya Shende, aka Kongsec, from India: a bounty hunter, biker, researcher, and trainer. Having trained people in bug bounty for the last five years and read countless write-ups, I kept noticing the same gap: sharing WHAT you exploited and HOW you exploited it are very different things. Many researchers share what vulnerability they found, but the methodology for finding that kind of issue at scale stays under the hood.

I hope this article inspires people to share their techniques rather than focusing only on what bug they got. Let's fire it up 🔥

This article is a fully upgraded version of my earlier article on this topic.

Let's jump in. We can use different tools, such as the following:

  • hakrawler — Simple, fast web crawler designed for easy, quick discovery of endpoints and assets within a web application
  • crawley — fast, feature-rich unix-way web scraper/crawler written in Golang.
  • katana — A next-generation crawling and spidering framework
  • LinkFinder — A python script that finds endpoints in JavaScript files
  • JS-Scan — a .js scanner, built in php. designed to scrape urls and other info
  • LinksDumper — Extract (links/possible endpoints) from responses & filter them via decoding/sorting
  • GoLinkFinder — A fast and minimal JS endpoint extractor
  • BurpJSLinkFinder — Burp extension for passive scanning of JS files for endpoint links.
  • urlgrab — A golang utility to spider through a website searching for additional links.
  • waybackurls — Fetch all the URLs that the Wayback Machine knows about for a domain
  • gau — Fetch known URLs from AlienVault’s Open Threat Exchange, the Wayback Machine, and Common Crawl.
  • getJS — A tool to quickly get all JavaScript sources/files
  • linx — Reveals invisible links within JavaScript files
  • waymore — Find way more from the Wayback Machine!
  • xnLinkFinder — A python tool used to discover endpoints, potential parameters, and a target specific wordlist for a given target

But this way we fetch the same files as every other hunter, so our findings end up as duplicates.

Initial discovery usually looks like the following:

subfinder -d domain.com | httpx -mc 200 | tee subdomains.txt && cat subdomains.txt | waybackurls | httpx -mc 200 | grep '\.js' | tee js.txt

But what if we brute-force these words against the target domain, or against any other target you are hunting on?

Here is a basic list of words I gathered for testing:

dialogs540f334e628dbce748a8js navigation_secondary55dfd8fe215f8edecd48js dialogsb18150a252f68f70f0c9js navigation_secondary147987372ed67d94de50js buttons147987372ed67d94de50js npmangular-animate8f9be52ce8a521f715a3js mainb18150a252f68f70f0c9js navigation7b5ba7de4b5e5fb011c7js dialogs147987372ed67d94de50js appmain7b5ba7de4b5e5fb011c7js main147987372ed67d94de50js buttons7b5ba7de4b5e5fb011c7js npmangulary-focus-store9327d7778ee0d85c3500js mainfb562f3396222d196abfjs breeze7b5ba7de4b5e5fb011c7js breezeb18150a252f68f70f0c9js breeze30886581e43164d9d721js breeze147987372ed67d94de50js navigationb18150a252f68f70f0c9js appmain147987372ed67d94de50js breezeee32c0b1526644e9b562js main7b5ba7de4b5e5fb011c7js dialogs7b5ba7de4b5e5fb011c7js navigationba64bbac173b1d655721js navigation147987372ed67d94de50js navigation_secondaryb18150a252f68f70f0c9js buttonscf9c75fee1de19837ae7js appmainb18150a252f68f70f0c9js navigation_secondary7b5ba7de4b5e5fb011c7js modalsb0f4a82ac6f25a46dc71js npmangular-ui-calendar423a597b943dc586730djs npmapollo-angular-link-httpe7a942f9925da8411a4ejs npmangular-ui-switch90766204ecd17b03ca76js appmainaf9ea97e6139d8cd52c2js npmapollo-angular-link-http-common87eff82eb4bc194887bfjs npmapollo-angular22f1de8a666515c86242js npmapollo-cache53668769616dc1466d8djs npmapollo-cache-inmemorydaeb4f1b88a15680fd12js buttonsb18150a252f68f70f0c9js npmangular-ui-bootstrapcd3d849d20f1a4f7dfacjs configjs npmattr-accept81d56f5e133bac14feb5js npmapollo-clientf1fffac92f44507c8f3ajs npmbase64-js61d2367f7816d6fec60fjs npmapollo-utilities9e092209349bda108468js npmaxiosb02cc1c0e336b6ce9d09js app147987372ed67d94de50js npmauth0b681a646eef51d083006js npmbraintree24d4f13fb9a355dadc24js npmbabel5fd8b43fabbd6864e9a2js npmcall-bind0f09a0bd48e4dac9d679js npmbreeze-client-labs03a64fb13d406c33bbc8js appaf9ea97e6139d8cd52c2js npmavailable-typed-arrays558d90654f4d4fc2aa04js npmcharacter-entities-legacy7f4022465f0c9c4a6fabjs npmblueimp-load-image3d0d2393c631d92c5a1ejs 
npmchartjs-color-stringbd3a54729bf6f60404afjs npmapollo-linka5d82a3252db6d3e8d15js npmaria-hiddena316c352eb617c047815js npmckeditorfde05d6a29366eaf2c71js npmcollapse-white-spacebdd075f4c3faca5c940fjs npmcharacter-reference-invalid2f9cdaeeea24c3f3897ejs npmbail2e238f58e0858fcf0e31js npmcolor-convert101a98cb8d9df306dc12js npmchartjs-color703b6867120bd9ebf784js npmbreeze-client75c1a11b2c8e46de7ce4js

So we can reuse this wordlist on a new target.
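To sketch that reuse, here is one way to pair the curated wordlist with a new target's base URL to build candidate URLs for probing. The filenames and the target host are illustrative; the actual probing step needs the network, so it is only shown as a comment.

```shell
# Curated JS filename wordlist (tiny illustrative slice).
cat <<'EOF' > /tmp/jswords.txt
main7b5ba7de4b5e5fb011c7.js
dialogs540f334e628dbce748a8.js
EOF

# Pair each word with the new target's base URL.
while read -r word; do
  echo "https://data.samsung.com/${word}"
done < /tmp/jswords.txt > /tmp/candidates.txt

# Probe them afterwards (network step, not run here):
#   httpx -l /tmp/candidates.txt -mc 200
cat /tmp/candidates.txt
```

The same list also works directly as a ffuf/dirsearch wordlist if you prefer fuzzing over pre-generating URLs.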

Here is HOW we do it:

waybackurls "site.com" | grep -Eo 'https?://[^/]+/[^"]+\.js' | sed 's|^https\?://[^/]\+/||' | awk -F '/' '{print $NF}'

Let’s break down each part of the command:

  • waybackurls "example.com": This command retrieves URLs associated with "example.com" from the Wayback Machine archives.
  • grep -Eo 'https?://[^/]+/[^"]+\.js': This command searches for URLs with a .js extension. The -E flag enables extended regular expressions, and the -o flag tells grep to output only the matching parts.
  • sed 's|^https\?://[^/]\+/||': This command removes the protocol (http:// or https://) and domain name from each URL, leaving only the path.
  • awk -F '/' '{print $NF}': This command splits each remaining path on / and prints the last field, leaving just the JS filename.

So, when you run this command, it gives you a list of .js filenames extracted from archived snapshots, with the domain and path stripped. Replace "site.com" with your target domain.
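As a quick sanity check, here is the same pipeline run on a few simulated waybackurls lines (the URLs are invented), reading from a local file instead of the network:

```shell
# Fake waybackurls output so the extraction steps can be shown offline.
cat <<'EOF' > /tmp/wayback_sample.txt
https://site.com/assets/main7b5ba7de4b5e5fb011c7.js?v=2
https://cdn.site.com/js/dialogs540f334e628dbce748a8.js
https://site.com/index.html
EOF

# Same pipeline as above; sort -u dedupes the resulting wordlist.
grep -Eo 'https?://[^/]+/[^"]+\.js' /tmp/wayback_sample.txt \
  | sed 's|^https\?://[^/]\+/||' \
  | awk -F '/' '{print $NF}' \
  | sort -u > /tmp/jswords.txt

cat /tmp/jswords.txt
# dialogs540f334e628dbce748a8.js
# main7b5ba7de4b5e5fb011c7.js
```

Note how the non-JS URL is dropped and the `?v=2` query string never makes it into the match, because the regex must end at `.js`.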

You can see that a few of the keywords are new and unique. We can curate a JS wordlist from one target and reuse it on another. For example:

We collected JS words from dell.com and used them on data.samsung.com: we can find new files, stack errors, and paths useful for directory listing.

This way we can surface brand-new JS files on the new target, then sort them by size, data type, and content.
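One simple way to do that triage is to probe the discovered files with httpx (with its content-length output enabled) and sort on the size column; the probe output below is simulated so the sorting step can be shown offline. Tiny files like config bundles often hold the interesting values.

```shell
# Simulated httpx-style probe output: url, status, byte size.
cat <<'EOF' > /tmp/js_probe.txt
https://target.com/main.js 200 184233
https://target.com/config.js 200 912
https://target.com/vendor.js 200 2048771
EOF

# Sort numerically by the size column (smallest first).
sort -k3 -n /tmp/js_probe.txt
```

Outliers at either end are worth opening first: very small files tend to be configs, very large ones are bundles worth grepping for endpoints.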

We can use these same keywords against the IPs you get from Shodan.

The remaining exploitation stays the same, but with one small modification:

curl -s https://app.site.com/config.js | \
grep -E "environment: 'Production'|storageUrl: 'https://buildxact.blob.core.windows.net/'|googleApiKey: '|appInsightsInstrumentationKey: '|globalApiEndpoint: '|streamChatApiKey: '|auth0ClientId: '|auth0Domain: '|flatfileApiKey: '|webSpellCheckerServiceId: '|webSpellCheckerServiceUrl: '|clientPortalUrl: '|appVersion: '|appVersionDate: '|appDomainUrl: '|oneBuildKey: '|flatfilePlatformPublishableKey: '|flatfilePlatformEnvironmentId: '" | \
sed "s/.*'\(.*\)'.*/\1/"
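To see what the final sed step does, here it is applied to a single sample config line (the key name and value are made up):

```shell
# The sed expression strips everything outside the single quotes,
# leaving just the value.
echo "googleApiKey: 'AIza-fake-key'," | sed "s/.*'\(.*\)'.*/\1/"
# prints: AIza-fake-key
```

Note that it only keeps the last quoted value on a line with multiple quoted pairs, since `.*` is greedy, so it works best on configs with one key per line.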

We can add whatever words we consider sensitive.

Example:

ANACONDA_TOKEN=
ANALYTICS=
ANDROID_DOCS_DEPLOY_TOKEN=
android_sdk_license=
android_sdk_preview_license=
ANSIBLE_VAULT_PASSWORD=
aos_key=
aos_sec=
API_KEY_MCM=
API_KEY_SECRET=
API_KEY_SID=
API_KEY=
API_SECRET=
APIARY_API_KEY=
APIDOC_KEY
APIGW_ACCESS_TOKEN=
apiKey
apiSecret
APP_BUCKET_PERM=
APP_ID=
APP_NAME=
APP_REPORT_TOKEN_KEY=
APP_SECRETE=
APP_SETTINGS=
APP_TOKEN=
appClientSecret=
APPLE_ID_PASSWORD=
APPLE_ID_USERNAME=
APPLICATION_ID_MCM=
APPLICATION_ID=
applicationCacheEnabled=
ARGOS_TOKEN=
ARTIFACTORY_KEY=
ARTIFACTORY_USERNAME=
ARTIFACTS
ARTIFACTS_AWS_ACCESS_KEY_ID=
ARTIFACTS_AWS_SECRET_ACCESS_KEY=
ARTIFACTS_BUCKET=
ARTIFACTS_KEY=
ARTIFACTS_SECRET=
ASSISTANT_IAM_APIKEY=
ASYNC_MQ_APP_SECRET
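A minimal sketch of how to use a list like the one above: save the keywords one per line and grep them, case-insensitively, across a directory of downloaded JS files. The keyword slice, directory, and sample file here are all illustrative.

```shell
# Keyword list (a small slice of the one above) and a sample JS file.
printf '%s\n' 'API_KEY=' 'API_SECRET=' 'apiKey' > /tmp/keywords.txt
mkdir -p /tmp/js_dump
cat <<'EOF' > /tmp/js_dump/config.js
var cfg = { apiKey: "AIza-fake-key", debug: false };
EOF

# -H file name, -n line number, -r recurse, -i case-insensitive,
# -f read patterns from file.
grep -Hnri -f /tmp/keywords.txt /tmp/js_dump/
```

Each hit comes back as file:line:match, so you can jump straight to the candidate secret in the source.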

Once you have the JS URLs, you can run Nuclei with the exposures templates against them to pull out more sensitive information.

To run Nuclei on the js.txt file with the exposures templates, use the following command:

nuclei -l js.txt -t ~/nuclei-templates/exposures/ -o js_exposures_results.txt

Here’s an explanation of each part of the command:

  • nuclei: This is the command to run Nuclei, a fast and customizable vulnerability scanner.
  • -l js.txt: The -l flag specifies the file (js.txt) containing the list of URLs to scan with Nuclei.
  • -t ~/nuclei-templates/exposures/: The -t flag specifies the path to the Nuclei templates directory for the exposures tag. Adjust the path ~/nuclei-templates/exposures/ to match the actual path where your Nuclei templates are stored.
  • -o js_exposures_results.txt: The -o flag is used to specify the output file (js_exposures_results.txt) where the scan results will be saved. You can replace js_exposures_results.txt with the desired output file name.

The rest of the exploitation remains the same; you can refer to this article. Thanks for reading!

Jai Shree Ram
