'qbareclone' in place of 'bareclone'
(Bitbake rev: 90a3181f1397ae05862f4e89a9bbac606e74504e)
Signed-off-by: Laurent Bonnans <laurent.bonnans@here.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
The shallow_tarball check is never true due to a check on the caller side.
The tarball check is not related to the code on the caller side.
(Bitbake rev: 086eddcf8c7520ff5c52ce2a11ca9bf5b5fe5d7e)
Signed-off-by: Urs Fässler <urs.fassler@bbv.ch>
Signed-off-by: Pascal Bach <pascal.bach@siemens.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
To improve readability, we extract the different scenarios in which
the clonedir needs an update.
(Bitbake rev: 9038e029f4f0ab413727de76c74248cbb3cdc9ea)
Signed-off-by: Urs Fässler <urs.fassler@bbv.ch>
Signed-off-by: Pascal Bach <pascal.bach@siemens.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
The prior fetcher did not know how to work with MIRRORS, and did not
honor BB_NO_NETWORK and similar.
The new fetcher approach recursively calls 'gitsm' download on each
submodule detected. This ensures that it will go through the
standard download process.
Each downloaded submodule is then 'attached' to the original download in
the 'modules' directory. This mimics the behavior of:
git submodule init
but there is no chance it will contact the network without permission.
It then corrects upstream reference URIs.
The unpack step simply copies the items from the downloads to the destdir.
Once copied, the submodules are connected and we then run:
git submodule update
According to the git documentation, git submodule init can and will modify
the project configuration and may connect to the network. Doing the
work manually prevents this. (This manual process is allowed based
on my reading of the documentation.)
See: https://git-scm.com/book/en/v2/Git-Tools-Submodules
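A minimal sketch of the recursive idea (hypothetical simplification; the real
code also rewrites relative URLs, handles nesting, and populates the 'modules'
directory):

import os
import configparser
from bb.fetch2 import Fetch

def fetch_submodules(d, repodir):
    # Parse .gitmodules and route every submodule through the normal
    # fetcher, so MIRRORS/PREMIRRORS and BB_NO_NETWORK are honoured.
    gitmodules = os.path.join(repodir, ".gitmodules")
    if not os.path.exists(gitmodules):
        return
    parser = configparser.ConfigParser()
    parser.read(gitmodules)
    uris = []
    for section in parser.sections():
        url = parser.get(section, "url")
        # Hypothetical translation of an https submodule URL into SRC_URI form.
        uris.append("git://%s;protocol=https" % url.split("://", 1)[-1])
    if uris:
        Fetch(uris, d).download()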
The small change to the existing test is due to this new code always assuming
the code is from a remote system rather than a 'local' repository. If this
assumption proves to be incorrect, code will need to be added to deal
with local repositories without an upstream URI.
(Bitbake rev: 9c6b39adf9781fa6745f48913a97c859fa37eb5b)
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Before this fix it was assumed that the removal of the
remote could only fail because there was no remote to remove. This
is a false assumption. Example error which would be ignored:
git -c core.fsyncobjectfiles=0 remote rm origin failed with exit code 1, output:
Note: A branch outside the refs/remotes/ hierarchy was not removed;
to delete it, use:
git branch -d master
error: could not lock config file config
error: Could not remove config section 'remote.origin'
Due to the masking of this error, an even stranger error will be
presented to the user, because this time we do not mask the
exception:
git -c core.fsyncobjectfiles=0 remote add --mirror=fetch origin https://github.com/ptsneves/tl-wn722.git failed with exit code 128, output:
fatal: remote origin already exists.
The most likely reason that the remote can be neither removed nor
modified is that DL_DIR/git2 does not have permissions
compatible with the user running bitbake.
This commit fixes:
https://bugzilla.yoctoproject.org/show_bug.cgi?id=12728
(Bitbake rev: 9c86c582a10c9b23abad7d34b6cbf12f7086294d)
Signed-off-by: Paulo Neves <ptsneves@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
If uri_find contains parameters then the original URI parameters should
be checked against the parameters from uri_find instead of the parameters
from uri_replace.
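For example (hosts hypothetical), given a mirror entry such as:
PREMIRRORS = "git://example.com/.*;protocol=http git://mirror.example.org/PATH;protocol=https"
a source URI's parameters are compared against protocol=http from the find
expression rather than protocol=https from the replacement.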
(Bitbake rev: 8efa7826a61501589afa33eb698c0ab3a622bf2e)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Currently there is quite some variation between the fetchers in terms
of how they determine the subdirectory within DL_DIR and the base
fetch command to run. Some rely on variables being set externally
(e.g. from bitbake.conf in oe-core), some respect these external
variables but provide fallback defaults, and some use only hardcoded
internal values. Try to unify the approach used across the various
fetchers.
(Bitbake rev: efd5e35af4b08501c67e8b30f30d9457f6fdf610)
Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Although the submodules' histories have been fetched during the
do_fetch() phase, the mechanics used to clone the workdir copy
of the repo haven't been transferring the actual .git/modules
directory from the repo fetched into downloads/ during the
fetch task.
Fix that, and for good measure also explicitly tell Git to avoid
hitting the network during do_unpack() of the submodules.
[YOCTO #12739]
(Bitbake rev: 11b6a5d5c1b1bb0ce0c5bb3983610d13a3e8f84a)
Signed-off-by: Matt Hoosier <matt.hoosier@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Running the bitbake command with Python 3.6.5 always results in an
import error caused by changes to the distutils module.
This patch replaces the method used to search for the executable in PATH
with "/usr/bin/env <command>".
(Bitbake rev: bd9a1b063633af2936ba1dd87b19202424900151)
Signed-off-by: Tzu Hsiang Lin <t9360341@ntut.org.tw>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
For mirrors or premirrors defined like: "http://.*/.* http://somewhere.org"
fetching ends with errors because the function fetch2/__init__.py:encodeurl()
creates a URL like "http://somewhere.orgsomefile.tar.gz".
This happens because the function fetch2/__init__.py:decodeurl()
for the URL "http://somewhere.org" returns
['http', 'somewhere.org', '', '', '', {}]
and then in the function fetch2/__init__.py:uri_replace()
the variable result_decoded becomes
['http', 'somewhere.org', 'somefile.tar.gz', '', '', {}]
(because of the line: result_decoded[loc] = os.path.join(result_decoded[loc], basename))
for which encodeurl() returns "http://somewhere.orgsomefile.tar.gz".
By contrast, for a mirror "http://.*/.* http://somewhere.org/"
(with a trailing slash) everything works fine.
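A short reproduction of the pre-fix behaviour described above:

import os
from bb.fetch2 import decodeurl, encodeurl

scheme, host, path, user, pswd, params = decodeurl("http://somewhere.org")
# path is '' here, so joining the basename onto it yields a path with no
# leading '/', and encodeurl() glues it straight onto the host.
path = os.path.join(path, "somefile.tar.gz")
print(encodeurl((scheme, host, path, user, pswd, params)))
# (pre-fix) prints: http://somewhere.orgsomefile.tar.gz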
(Bitbake rev: d822ae24ef5485e550804cbd9130ebd73b2aa48e)
Signed-off-by: Jakub Dębski <jdebski@enigma.com.pl>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Updating a local git repo clone currently results in multiple calls
to self._contains_ref(), some of which appear to be redundant and can
be eliminated by minor tweaks to the logic in download().
Also drop redundant calls to os.path.exists(ud.clonedir) before
self.need_update(), since need_update() includes its own built-in
check for the existence of ud.clonedir.
(Bitbake rev: 61b0df5523afc8f805043f3adc9c106690e6f133)
Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
In order to allow users to manually populate the download directory with
valid content, change the assumption that a missing donestamp file
means unfetched content.
This allows users to populate the download dir without needing to create
dummy .done files, so that a user does not need a PREMIRROR when using
BB_NO_NETWORK to provide valid content files in the download directory.
To ensure the correct result, this change also fails first if the
localpath does not exist. This prevents later parts of the function from
attempting to calculate the checksum of non-existent files. It also
fixes an edge case where, if the donestamp existed but the
localpath did not, the function returned without removing the donestamp.
Also added test cases to cover this use case and additional use cases,
for example where the fetcher does not support checksums.
(Bitbake rev: a335dbbb65d5b56e71d98cf3e4fa9bfbec1dcde6)
Signed-off-by: Nathan Rossi <nathan@nathanrossi.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
At least the cli-color node module has dependencies that have
cyclic dependencies among themselves. npm.py is prepared to deal
with such a case, but the condition is only used to decide whether to
download a dependency again; it then goes on to check that dependency's
own dependencies, which causes an infinite loop in _getdependencies().
Make this function simply return when a dependency has already been
downloaded, and only download and check its dependencies when it has not.
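A sketch of the guard (argument names illustrative, simplified from the real
function):

def _getdependencies(self, pkg, data, version, d, ud, fetchedlist):
    if pkg in fetchedlist:
        # Already downloaded: do not descend into its dependencies again,
        # otherwise cyclic dependencies recurse forever.
        return
    fetchedlist.append(pkg)
    pdata = self._fetch_pkg(pkg, version, d, ud)  # hypothetical helper
    for depname, depversion in pdata.get("dependencies", {}).items():
        self._getdependencies(depname, data, depversion, d, ud, fetchedlist)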
(Bitbake rev: 545540420112992e53f4a83104af10452df168d0)
Signed-off-by: Zoltán Böszörményi <zboszor@pr.hu>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Move the code that existed in tests/fetch.py for determining the path to
'git-make-shallow' into the git module and reference it.
This ensures that 'git-make-shallow' is always available and is the desired
version, regardless of the PATH variable or whether git exposes the
command.
(Bitbake rev: 6b508ab8fd5aa796c1c00c970e81e5e93f84d35d)
Signed-off-by: Nathan Rossi <nathan@nathanrossi.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
We've noticed issues on our infrastructure iterating over the many
tag/branch/head reference files that some git repositories may contain.
By issuing the pack-refs command, we move these all to a single file
which speeds up operations with the mirror repos in the downloads
directory in general.
(Bitbake rev: f8126aaf774186a6eaf0bd4067b89c074594886c)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
If a fetcher, e.g., git, is run when pseudo is active it will think it
is running as root. If it in turn uses ssh (as git does), ssh too will
think it is running as root. This will cause it to try to read root's
ssh configuration from /root/.ssh which will fail. If ssh then needs to
ask for credentials it will hang indefinitely as there is nowhere for it
to ask the user for them (and even if there was it would not access the
correct private keys).
The solution to the above is to temporarily disable pseudo while
executing any fetcher commands. There should be no reason for them to be
executed under pseudo anyway so this should not be a problem.
RP Addendum:
We finally did get more information about how to reproduce this problem:
something needs to trigger bb.fetch2.get_srcrev() in a pseudo context,
for example when AUTOREV is in use or the recipe doesn't have a defined
SRCREV, and the SRC_URI needs to be using protocol=ssh. This triggers
an ls-remote of the remote repo, and if that happens under pseudo, the
wrong ssh credentials may be attempted, which can hang.
[YOCTO #12464]
(Bitbake rev: ceaca281cafa662aa2385b95641bce309dce843d)
Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
.txz is the same as .tar.xz, and can be found in the wild.
(Bitbake rev: 2ba8a6b25ccc12e7b543e8450121e5311c7a701d)
Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
The connection cache class uses a dummy file object but it doesn't have a closed
attribute, so we can't use it in a context manager.
(Bitbake rev: 7b072ef91d16331eae11bd60f229ce1f0c175995)
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
After cleaning up deprecated API usage, the repo fetcher is missing the
logger, as it was previously imported indirectly via the deprecated bb.data.
Fix this by importing logger directly.
Fixes: 9752fd1c10b8 ("fetch2: don't use deprecated bb.data APIs")
(Bitbake rev: f8e027d26603db2f1fe757dca767ea35d95174c7)
Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Since 2017-08-17 (git version 2.14.1.473.g3ec7d702a) using deprecated
git branch parameter "--set-upstream" causes a fetcher error. Replace
it by "--set-upstream-to".
https://git.kernel.org/pub/scm/git/git.git/commit/?id=52668846ea2d41ffbd87cda7cb8e492dea9f2c4d
says it has been deprecated since 2012-08-30, so hopefully all still-supported
host distributions have a new enough git to support "--set-upstream-to".
ERROR: PACKAGE do_unpack: Fetcher failure: ...;
git -c core.fsyncobjectfiles=0 branch --set-upstream master origin/master failed with exit code 128, output:
fatal: the '--set-upstream' option is no longer supported. Please use '--track' or '--set-upstream-to' instead.
ERROR: PACKAGE do_unpack: Function failed: base_do_unpack
(Bitbake rev: 2ab50074c1a6c56a8a178755de108447d7b7acaf)
Signed-off-by: Andre Rosa <andre.rosa@lge.com>
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
When fetching source for the first time within scripts such as
OpenEmbedded's recipetool, we don't want to be showing warnings about
NPM_SHRINKWRAP or NPM_LOCKDOWN not being set since there's no way we
could have set them in advance. Previously we were using
ud.ignore_checksums to suppress these but since we are now using a more
standard task-based path to fetch the source, we need to disable these
through the metadata. Look for a "noverify" parameter set on the npm URL
and skip the checks if it is set to "1".
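For example (package name and version hypothetical):
SRC_URI = "npm://registry.npmjs.org;name=example-module;version=1.0.0;noverify=1"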
(Bitbake rev: 8c4b35d1e4d31bae9fddd129d5ba230acb72c3bb)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
URLs do not have to have a path; currently our npm URLs don't, so
encodeurl() needs to handle if the path element isn't specified. This
fixes errors using OpenEmbedded's devtool add / recipetool create on an
npm URL after OE-Core revision ecca596b75cfda2f798a0bdde75f4f774e23a95b
that uses decodeurl() and encodeurl() to change URL parameter values.
(Bitbake rev: d5cab2dbf5682d2fd08e58316a3bf39a10f63df2)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
My previous assertion about FusionForge appears to have been wrong, or
FusionForge has changed behaviour, or both.
FusionForge now mandates that downloads have the Accept header set, despite that
header being optional, and returns a 406 Not Acceptable error if it isn't set.
As we were pretending that 406 was actually 405 (Method Not Allowed) and tried to handle it as a
redirect, this results in an infinite loop until Python kills the recursion.
Delete the handling of 406 as 405, and pass Accept: */* in the headers.
(Bitbake rev: bb70ae0c9aac5ec688026d23a64ac0cac1947187)
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
The checkstatus() code was expecting checkstatus to throw exceptions if it
failed, but in general it should return False.
(Bitbake rev: 57be5cc6228518e60f564570a39cebbeb6cf564e)
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
When wget is fetching a listing for a directory over FTP it writes to a
temporary file called .listing in the current directory. If there are many such
operations happening in parallel - for example during 'bitbake world -c
checkpkg' - then up to BB_NUMBER_THREADS instances of wget will be racing to
write to, read, and delete the same file.
This results in various failures such as the file disappearing before wget has
processed it or the file changing contents, which causes checkpkg to randomly
fail.
Mitigate the race condition by creating a temporary directory to run wget in
when doing directory listings.
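A sketch of the mitigation (function name hypothetical; runfetchcmd() is the
fetcher's existing command helper):

import tempfile
import bb.utils
from bb.fetch2 import runfetchcmd

def _fetch_ftp_listing(fetchcmd, d):
    # Run wget in a private temporary directory so parallel instances do
    # not race over the shared ".listing" file it writes.
    workdir = tempfile.mkdtemp(prefix="wget-listing-")
    try:
        runfetchcmd(fetchcmd, d, quiet=True, workdir=workdir)
    finally:
        bb.utils.remove(workdir, recurse=True)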
[ YOCTO #11828 ]
(Bitbake rev: 91d4ca93df092cf86ab84faaa94cc66ff9f43057)
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
When the sstate is accessed via HTTP, the existence check can fail due
to network issues, in which case bitbake silently continues without
sstate.
One such network issue is an HTTP server like Python's own SimpleHTTP
which closes the TCP connection despite an explicit "Keep-Alive" in
the HTTP request header. The server does that without a "close" in the
HTTP response header, so the socket remains in the connection cache,
leading to "urlopen failed: <urlopen error [Errno 9] Bad file
descriptor>" (only visible in "bitbake -D -D" output) when trying to
use the cached connection again.
The connection might also get closed for other reasons (proxy,
timeouts, etc.), so this is something that the client should be able
to handle.
This is achieved by checking for the error, removing the bad
connection, and letting the check_status() method try again with a new
connection. It is necessary to let the second attempt fail
permanently, because bad proxy setups have been observed to also lead
to such broken connections. In that case, we need to abort for real
after trying twice, otherwise a build would just hang forever.
[YOCTO #11782]
(Bitbake rev: 6fa07752bbd3ac345cd8617da49a70e0b2dd565f)
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
If BB_STRICT_CHECKSUMS is set to anything other than "1" i.e. we're not
going to raise an error, then fire an event so that scripts can listen
for it and get the checksums.
(Bitbake rev: 8b2ccb4b865f2df118ef668847df682a83f9c500)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
If BB_STRICT_CHECKSUMS is set to "ignore" then don't display a warning
if no checksums are specified in the recipe. This is not intended to be
used from recipes - it is needed when we move to using more standard
code paths to fetch new files from scripts i.e. where we don't know what
the checksums are in advance.
(Bitbake rev: f15ca7339de8a448a93a14cf6130b3925178a920)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
'mirrortarball' is supposed to be a local variable to the function.
(Bitbake rev: a457cbfb1f20a47db3978290921d0708cd96bd70)
Signed-off-by: Ismo Puustinen <ismo.puustinen@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Ensure that when an item fetched from a premirror has an invalid checksum the
fetcher falls back to the usual logic of trying the upstream and any configured
mirrors.
(Bitbake rev: 022adb30dbb0df764c9fb515918cb9a88e4f8d6f)
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
In certain cases, it's valuable to be able to exert more control over what
history is removed, beyond srcrev+depth. As one example, you can remove most
of the upstream kernel history from a kernel repository, keeping predominently
the non-publically-accessible content. If the repository is private, the
history in that repo couldn't be restored via `git fetch --unshallow`, but
upstream history could be.
Example usage:
# Remove only these revs, not at a particular depth
BB_GIT_SHALLOW_DEPTH_pn-linux-foo = "0"
BB_GIT_SHALLOW_REVS_pn-linux-foo = "v4.1"
(Bitbake rev: 97f856f0455d014ea34c28b1c25f09e13cdc851b)
Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
When we're building from a shallow mirror tarball, we don't want to do
anything with ud.clonedir, as it's not being used when we unpack. As such,
disable updating annex in that case. Also include annex files in the shallow
tarball.
(Bitbake rev: ca0dd3c95502b22c369fbf37f915f45e02c06887)
Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
When we're building from a shallow mirror tarball, we don't want to do
anything with ud.clonedir, as it's not being used when we unpack. As such,
disable updating the submodules in that case. Also include the repositories in
.git/modules in the shallow tarball. It does not actually make the submodule
repositories shallow at this time.
(Bitbake rev: 6c0613f1f2f9d4f009545f82a9173e80396f9d34)
Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
By default, all unused refs (branches & tags) are removed from the repository,
as shallow processing scales with the number of refs it has to process. Add
the ability to explicitly specify additional refs to keep. This is
particularly useful for recipes with custom checkout processes, or whose
git-based versioning requires a tag be available (i.e. for `git describe
--tags`). The new `BB_GIT_SHALLOW_EXTRA_REFS` variable is a space-separated
list of refs, fully specified, and supports wildcards.
Example usages:
BB_GIT_SHALLOW_EXTRA_REFS = "refs/tags/v1.0"
BB_GIT_SHALLOW_EXTRA_REFS += "refs/heads/*"
(Bitbake rev: 1771934cd9f8b5847c6fcae0a906fb99d6b0db16)
Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Allow the user to explicitly adjust the depth for named urls/branches. The
un-suffixed BB_GIT_SHALLOW_DEPTH is used as the default.
Example usage:
BB_GIT_SHALLOW_DEPTH = "1"
BB_GIT_SHALLOW_DEPTH_doc = "0"
BB_GIT_SHALLOW_DEPTH_meta = "0"
(Bitbake rev: 9dfc517e5bcc6dd203a0ad685cc884676d2984c4)
Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
This adds support to the git fetcher for fetching, using, and generating
mirror tarballs of shallow git repositories. The external git-make-shallow
script is used for shallow mirror tarball creation.
This implements support for shallow mirror tarballs, not shallow clones.
Supporting shallow clones directly is not really doable for us, as we'd need
to hardcode the depth between branch HEAD and the SRCREV, and that depth would
change as the branch is updated.
When BB_GIT_SHALLOW is enabled, we will always attempt to fetch a shallow
mirror tarball. If the shallow mirror tarball cannot be fetched, it will try
to fetch the full mirror tarball and use that. If a shallow tarball is to be
used, it will be unpacked directly at `do_unpack` time, rather than extracting
it to DL_DIR at `do_fetch` time and cloning from there, to keep things simple.
There's no value in keeping a shallow repository in DL_DIR, and dealing with
the state for when to convert the clonedir to/from shallow is not worthwhile.
To clarify when shallow is used vs a real repository, a current clone is
preferred to either tarball, a shallow tarball is preferred to an out of date
clone, and a missing clone will use either tarball (attempting the shallow one
first).
All referenced branches are truncated to SRCREV (that is, commits *after*
SRCREV but before HEAD are removed) to further shrink the repository. By
default, the shallow construction process removes all unused refs
(branches/tags) from the repository, other than those referenced by the URL.
Example usage:
BB_GIT_SHALLOW ?= "1"
# Keep only the top commit
BB_GIT_SHALLOW_DEPTH ?= "1"
# This defaults to enabled if both BB_GIT_SHALLOW and
# BB_GENERATE_MIRROR_TARBALLS are enabled
BB_GENERATE_SHALLOW_TARBALLS ?= "1"
(Bitbake rev: 5ed7d85fda7c671be10ec24d7981b87a7d0d3366)
Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Remove ud.mirrortarball in favor of ud.mirrortarballs. Each tarball will be
attempted, in order, and the first available will be used. This is needed for
git shallow mirror tarball support, as we want to be able to use either
a shallow or full mirror tarball.
(Bitbake rev: 02eebee6709e57b523862257f75929e64f16d6b0)
Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
This has long since been deprecated and is no longer used anywhere; FILESPATH
is the commonly used variable, which offers much more flexibility. Remove
the FILESDIR code and references from bitbake.
(Bitbake rev: 751c9dc51fd01fa64a1ff37ba2638110335f71af)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
When the fetcher retrieves file:// URLs, there is no lock file being
used. This means that in case two separate tasks (typically from two
concurrent invocations of bitbake) want to download the same file://
URL at the same time, there is a very small chance that they also end
up wanting to create a symbolic link to the file at the same time.
This would previously lead to one of the tasks failing as the other
task would have created the link.
(Bitbake rev: 58a03531c8183b165bb7dcad86d8559c92bc150d)
Signed-off-by: Peter Kjellerstedt <pkj@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
URL decoding was improved in the core a while ago and this looks like
a leftover from those times, which caused URLs needing a user/password to
fail. Use the parameters from the core instead of the broken split
implementation.
[YOCTO #11262]
(Bitbake rev: 6a917ec99d659e684b15fa8af94c325172676062)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
This commit looks to see if FETCHCMD_s3 is set and, if not, sets it.
This is needed because I have use cases where I don't use aws but
s3cmd (due to licensing).
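A sketch of the change (default command illustrative):

def urldata_init(self, ud, d):
    # Respect an externally provided FETCHCMD_s3, falling back to the
    # aws CLI only when nothing is set.
    self.basecmd = d.getVar("FETCHCMD_s3") or "/usr/bin/env aws s3"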
(Bitbake rev: fdeaed70a7d1ff8be1a1de937cb864130b0c2c86)
Signed-off-by: Elizabeth 'pidge' Flanagan <pidge@toganlabs.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Imagine you have an sstate mirror accessed over http and an SSTATE_MIRRORS
which maps file:// urls to http:// urls.
File urls set donestampneeded = False, http urls don't. This can result in
races in the try_mirror_url() code since it will trigger new downloads after
acquiring the lockfile, as verify_donestamp() doesn't look at origud and there
is no donestamp.
verify_donestamp() already has code to look at origud, we're just missing
some code at the start of the function to do this. Fix it to avoid
these races.
(Bitbake rev: b8b14d975a254444461ba857fc6fb8c725de8874)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
We call git ls-remote to get the latest revision from a git repository,
however by calling runfetchcmd() we can end up recursively running
git ls-remote a number of times with OE, e.g. if ${SRCPV} is in PV, ${PV}
is in WORKDIR, and ${WORKDIR} is in PATH (as a result of recipe-specific
sysroots), our call to runfetchcmd() exports PATH so _lsremote() will
get called again - with the end result that we run git ls-remote 30
times in quick succession (!). Prevent that from happening by using a
guard variable and returning a dummy value if it's called recursively.
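A sketch of the guard (variable and helper names illustrative):

def _lsremote(self, ud, d, search):
    # If runfetchcmd() below re-triggers expansion that calls us again,
    # return a dummy value instead of recursing.
    if d.getVar("_BB_GIT_IN_LSREMOTE", False):
        return ""
    d.setVar("_BB_GIT_IN_LSREMOTE", "1")
    try:
        repourl = self._get_repo_url(ud)  # build the real repository URL
        cmd = "%s ls-remote %s %s" % (ud.basecmd, repourl, search)
        return runfetchcmd(cmd, d, True)
    finally:
        d.delVar("_BB_GIT_IN_LSREMOTE")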
Fixes [YOCTO #11185].
(Bitbake rev: ff1ccd1db5d70b3fc9ad0d3e8f3d7b804c22bf36)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
FetchError isn't defined, use bb.fetch2.FetchError in this context.
(Bitbake rev: 945fa980e027753df2c21d84eb63dcaddb2caaee)
Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
The touch of .done explicitly specifies the path, so there's no need for
workdir=, and "os.path.join('.')" is identical to just '.'.
(Bitbake rev: 955cbfdaa2400d15ec428b65848e6835c9f44860)
Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
list.index() isn't a particularly efficient operation, so keep track of our
position via enumerate() instead, which is more pythonic as well.
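The pattern in question, in general form:

items = ["a", "b", "c"]
# Instead of looking the index up on every iteration:
#   for item in items:
#       i = items.index(item)   # O(n) scan, and wrong with duplicates
# track the position directly:
for i, item in enumerate(items):
    print(i, item)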
(Bitbake rev: dec6e90a4d27ee335e9c78aeebd277098fec94d1)
Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Treat tar files compressed with 7-Zip in the same way as tar files
compressed with other compression formats.
(Bitbake rev: 363a0f54dc7d9930537f0df25173fa31ca1f98ac)
Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Class for fetching files from Amazon S3 using the AWS Command Line
Interface. The aws tool must be correctly installed and configured
prior to use.
The class supports both download() and checkstatus(), which therefore
allows S3 mirrors to be used for SSTATE_MIRRORS.
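Example usage (bucket and paths hypothetical):
SRC_URI = "s3://example-bucket/downloads/somefile-1.0.tar.gz"
SSTATE_MIRRORS ?= "file://.* s3://example-bucket/sstate-cache/PATH"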
(Bitbake rev: 6fe07ed25457dd7952b60f4b2153d56b15d5eea6)
Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Forcing the use of "\n" in mirror variables is pointless; we can just require that
there are pairs of values.
(Bitbake rev: 044fb04dbe69313ee6908bf4d3cee7f797d0c41c)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Cleanup some more usage of bb.data APIs in the fetchers.
(Bitbake rev: 9752fd1c10b8fcc819822fa6eabc2c1050fcc03b)
Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Most angular2 packages have names of the form @angular/xxx.
The / obviously can't be used in a file name, so replace it with -.
(Bitbake rev: d3bd41d0ec9621307c362b394872b18b8b7ed8d6)
Signed-off-by: Anders Darander <anders@chargestorm.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
A number of npm packages use @ as a leading character.
Examples are most of the angular2 packages.
(Bitbake rev: 628c4bf6c89b3d62c9b864380b5c8e131a899bff)
Signed-off-by: Anders Darander <anders@chargestorm.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
The functionality around the 'rsh' parameter seemed to be broken. The
'rsh' parameter was only used when 'protocol' was set to 'svn+ssh', which
is confusing. The 'rsh' parameter was used for setting the value of the
'svn_RSH' environment variable, which, however, is not supported by svn
(at least not according to the SVN documentation).
This patch removes the 'rsh' parameter and replaces it with 'ssh'. This
new (optional) parameter is used when svn+ssh protocol is used and it
can be used to specify the ssh program used by svn. This is achieved by
setting the SVN_SSH environment variable which is mentioned in SVN
documentation.
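Example usage (host, module and ssh command hypothetical):
SRC_URI = "svn://svn.example.com/project;module=trunk;protocol=svn+ssh;ssh=/usr/bin/ssh"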
(Bitbake rev: 5b364b02270b0d7c2b7ca8d67fa2731bf93720ee)
Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
- In some cases the file descriptor is held by the NFS client and none of
the os.path.* checks catches that; the error may not be reported because
the client has cached the stat info. In this case we are
out of luck. We needed to catch IOError, which is what carries
the Stale error.
- In the download method, update_stamp invokes
md5sum validation, which was found to be throwing
Stale errors.
- Added error handling to fix the stale errors.
(Bitbake rev: 5a53e7d7b017769a6eb0f0a6335735a1fe51a5ec)
Signed-off-by: Balaji Punnuru <balaji_punnuru@cable.comcast.com>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
For spelling's sake, rename Python routine "setup_revisons" to
"setup_revisions."
(Bitbake rev: 4df59b027c02ef39d72476251ccd3fd62fc20bf6)
Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(Bitbake rev: 05f5421b2e44cd58c5912848de43d5884d070150)
Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
When I originally added this check I didn't quite understand how the
values in this field should be expressed. It seems from reading the
documentation that if there is an entry starting with '!' then the list is
a blacklist, and we shouldn't expect "linux" to be in the list or we'll
end up skipping important dependencies.
This fixes fetching the "statsd" npm package.
Fixes [YOCTO #10760].
(Bitbake rev: 7aa6d1586417e0e7d9925917a82caee5884957db)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
An npm package.json file has two dependency fields: dependencies and
optionalDependencies. An item in optionalDependencies *may* also be
listed in dependencies, but this is not required (and not necessary,
since if it's in optionalDependencies it will be optional; adding it to
dependencies won't do anything). The code here was assuming that an
optional dependency would always be in both; that's probably because
that was true of the examples I was looking at at the time. To fix it,
just add the optional ones to the list we're iterating over.
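A sketch of the fix (dictionary names illustrative):

# Iterate over the union of both fields so an optional dependency that
# is not repeated under "dependencies" is still fetched and checked.
alldeps = dict(pdata.get("dependencies", {}))
alldeps.update(pdata.get("optionalDependencies", {}))
for depname, depversion in alldeps.items():
    pass  # fetch/check each dependency as before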
(Bitbake rev: c0c50d43266150a80be31ae2c6fcaf37f5ba231d)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
If PATH contains WORKDIR, which contains PV, which contains SRCPV, we can end
up in circular recursion within the fetcher. This code change allows the recursion
to be broken by giving PV a temporary dummy value in a data store copy.
(Bitbake rev: ce1e70b8018340b54dba3a81d7d379182cb77514)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
When the gitsm fetcher is used with a repo that includes a .gitattributes
file that makes git modify files on cloning (e.g. line break characters),
the subsequent checkout performed in the update_submodules function fails.
This is fixed by adding the force flag (-f) to the checkout command.
(Bitbake rev: c05e1396625b14e66d795408ea2ae4cd2afc3209)
Signed-off-by: Ola Redell <ola.redell@retotech.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Commit 873e33d0479e977520106b65d149ff1799195bf6 [fetch2/wget:
add Basic Auth from netrc to checkstatus()] causes "Fetcher failure
for URL: 'https://www.example.com/'. URL https://www.example.com/
doesn't work." on new builds when a user has a .netrc file but there
is no default and no matching host. The call to netrc.authenticators()
will return None in these cases and the attempted assignment to the
3-tuple will raise a TypeError exception. Add the TypeError to the
exceptions caught to get around this issue.
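A sketch of the handling (simplified; based on the stdlib netrc behaviour):

import netrc
import urllib.parse

def netrc_credentials(uri):
    try:
        hostname = urllib.parse.urlparse(uri).hostname
        # authenticators() returns None when there is no matching host and
        # no default entry; unpacking None then raises TypeError.
        login, _, password = netrc.netrc().authenticators(hostname)
        return login, password
    except (TypeError, IOError, netrc.NetrcParseError):
        return None, None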
(Bitbake rev: c0c0af40ebddaf9dc99353c580a65d4c04295613)
Signed-off-by: Mark Asselstine <mark.asselstine@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
fetch2/wget uses urllib to check the status of the mirrors; wget will
use netrc to pass login and password information, however checkstatus
will skip that.
This adds netrc login and password to checkstatus so both will work the
same.
(Bitbake rev: 873e33d0479e977520106b65d149ff1799195bf6)
Signed-off-by: Matthew McClintock <msm-oss@mcclintock.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
getVarFlag() now defaults to expanding by default, thus remove the
True option from getVarFlag() calls with a regex search and
replace.
Search made with the following regex:
getVarFlag ?\(( ?[^,()]*, ?[^,()]*), True\)
(Bitbake rev: c19baa8c19ea8ab9b9b64fd30298d8764c6fd2cd)
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
getVar() now defaults to expanding by default, thus remove the True
option from getVar() calls with a regex search and replace.
Search made with the following regex: getVar ?\(( ?[^,()]*), True\)
(Bitbake rev: 3b45c479de8640f92dd1d9f147b02e1eecfaadc8)
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
The old style bb.data.getVar/setVar API is obsolete. Most of bitbake
doesn't use it but there were some pieces that escaped conversion. This
patch fixes the remaining users mostly in the fetchers.
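In short, the conversion is (variable names illustrative):

# Old, obsolete style:
#   bb.data.getVar('SRC_URI', d, True)
#   bb.data.setVar('FOO', 'bar', d)
# New style, calling the datastore object directly:
src_uri = d.getVar('SRC_URI')
d.setVar('FOO', 'bar')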
(Bitbake rev: ff7892fa808116acc1ac50effa023a4cb031a5fc)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
If ud.ignore_checksums is set (which we currently use to suppress the
warnings for missing SRC_URI checksums when fetching files from
scripts), then if we're fetching an npm package we should similarly
suppress the warnings when NPM_LOCKDOWN and NPM_SHRINKWRAP aren't set.
At the same time, make any errors reading either of these files actual
errors, since if the file is specified and could not be found, that
should be an error, not the exact same warning.
Fixes [YOCTO #10464].
(Bitbake rev: cefb8c93c8299e68352cf7ec5ad9ca50c0d499ed)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Currently if you use the subdir parameter in a SRC_URI and pass an absolute path
then it gets appended to the unpack directory instead of being used directly.
This is inconvenient as it may be useful to use ${S} when you want to unpack a
file into the source tree.
Change this behaviour so that absolute paths are used directly instead of being
appended to the root directory. To ensure that recipes cannot write files to an
arbitrary location, enforce that the subdir starts with the unpack root.
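For example (file name hypothetical), this now unpacks the file directly into
the source tree:
SRC_URI = "file://defconfig;subdir=${S}/configs"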
(Bitbake rev: c3873346c6fa1021a1d63bddd9b898a77c618432)
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
The implementation of SRCREV_FORMAT has at least two issues:
1. Given two names "foo" and "foobar" and SRCREV_FORMAT = "foo_foobar",
"foo" might currently get substituted twice, and "foobar" not at
all.
2. If the revision substituted for some name happens to contain another
name as a substring, then that substring might incorrectly get
replaced.
Fix both issues by sorting the names with the longest ones first and
replacing all names at once with a regular expression. This was inspired
by
http://stackoverflow.com/questions/6116978/python-replace-multiple-strings.
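A simplified sketch of the approach:

import re

def expand_srcrev_format(format_str, name_to_rev):
    # Longest names first so "foo" never clobbers "foobar", and a single
    # regex pass so substituted revisions are never re-matched as names.
    names = sorted(name_to_rev, key=len, reverse=True)
    pattern = "|".join(re.escape(name) for name in names)
    return re.sub(pattern, lambda m: name_to_rev[m.group(0)], format_str)

print(expand_srcrev_format("foo_foobar", {"foo": "111", "foobar": "222"}))
# -> 111_222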
(Bitbake rev: 8e6a893cb7f13ea14051fc40c6c9baf41aa47fee)
Signed-off-by: Ulf Magnusson <ulfalizer@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
If we've already fetched a particular URL then we do not need to do so
again within the same operation. Maintain an internal list of fetched
URLs to avoid doing that.
(Bitbake rev: b4705c80add1f618c11a9223cdd9578d763b50ec)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
The correct name of the parameter is "version" not "ver" so ensure we
aren't misleading the user by giving the latter in an example.
(Bitbake rev: 14c045c6a20993d389b91ae2459d811a1430a7b2)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Allow using a top-level shrinkwrap file with one or more npm://
dependencies, i.e. if the module isn't found at the top level then look
one level down.
Part of the fix for [YOCTO #9537].
(Bitbake rev: f7de3f8b5f628dee043fe783148812914ab20813)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
"npmpkg" can be a default, but it should respect the subdir parameter as
with other FetchMethods. This allows us to have more than one npm://
entry in SRC_URI without nasty hacks.
Fix required in order to support [YOCTO #9537].
(Bitbake rev: e6a94d2091ec5d42f25102334a8492a731b8dec3)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
You cannot set a URL-specific value in an object-level variable on
the FetchMethod in urldata_init(), or the result is that the value specific to
the last URL will be the one that gets set. This prevented fetching more
than one npm:// URL correctly - the other tarballs would not download to
the correct location and do_unpack failed to find them as a result.
Fix required in order to support [YOCTO #9537].
(Bitbake rev: 1435b49ea7d0f9d4cc4a665fb2aa83d1eea7900f)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
We were downloading into the current directory here, which is fine if
that current directory can be expected to be the right place - but
that's not true when called from recipetool within OE. We should
explicitly specify the directory to run the command in and then there
won't be a problem.
(Bitbake rev: 0ddaf725e5a0675b252b7f80b1706370e478175b)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
The ud.pkgdir argument was being passed as the 'quiet' argument to
runfetchcmd, not the 'workdir' argument, resulting in fetching the svn module
into the root of DL_DIR, not where it belongs.
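Roughly, the difference is (command variable illustrative):

# Wrong: ud.pkgdir lands in the positional 'quiet' argument, so the
# command runs in the current directory (the root of DL_DIR):
#   runfetchcmd(svnfetchcmd, d, ud.pkgdir)
# Right: pass it explicitly as the working directory:
runfetchcmd(svnfetchcmd, d, workdir=ud.pkgdir)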
Cc: Matt Madison <matt@madison.systems>
(Bitbake rev: dc756510a95f88b192352be6fcd1d5d77852c348)
Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
If http basic auth creds were added to sstate mirrors like so:
https://foo.com/sstate/PATH;user=foo:bar;downloadfilename=PATH
The sstate mirror check would silently fail with 401 unauthorized.
This patch allows both the check and the wget download to succeed by
checking for user credentials and, if present, adding the correct
headers or wget params as needed.
[ YOCTO #9815 ]
(Bitbake rev: cea8113d14da9e12db80a5b6b5811a47a7dfdeef)
Signed-off-by: Stephano Cetola <stephano.cetola@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
We need a separate fetcher cache per multiconfig as the revisions and other
SRC_URI data can potentially be different. For now, this is the simplest way
to achieve that and avoids linux-yocto kernel build failures when targeting
multiple machines for example.
(Bitbake rev: d98cc31d6668bc1d6372664593126b5e5132ef2c)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Now that the fetchers all preserve the current working
directory, the cwd changes in the try_mirror_url,
download, and checkstatus methods are no longer needed.
(Bitbake rev: 0ed8975c42718342a104a9764a58816f964ec4ea)
Signed-off-by: Matt Madison <matt@madison.systems>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Fix the methods in all fetchers so they don't change
the current working directory of the calling process, which
could lead to "changed cwd" warnings from bitbake.
(Bitbake rev: 6aa78bf3bd1f75728209e2d01faef31cb8887333)
Signed-off-by: Matt Madison <matt@madison.systems>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Introduce a new 'usehead' url parameter for git repositories. Specifying
usehead=1 causes bitbake to use whatever commit the repository HEAD is
pointing to. Usage of usehead=1 is only allowed for local git
repositories, i.e. it must always be accompanied with protocol=file url
parameter.
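Example usage (path hypothetical):
SRC_URI = "git:///path/to/local/repo;protocol=file;usehead=1"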
[YOCTO #9351]
(Bitbake rev: 2673fac5a9d06de937101e3fb2ddf1e60ff99abf)
Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
The git annex fetcher needs git annex to be initialized. Previously
it was using 'git annex sync' to do this, but that has the downside
of moving the checkout to the tip of the default branch. This means
that tags, SRCREV, etc don't work in the gitannex case.
(Bitbake rev: c1a57e2dd7fc96834643be5591a96f239215481a)
Signed-off-by: Terry Boese <terry.boese@vecima.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Starting from tar 1.29, the --exclude option won't work
anymore if it is not used before the path. There are some
fetch modules that copy the ptest using tar and the --exclude
option. This fixes these for bitbake.
[YOCTO #9763]
(Bitbake rev: cc71d5d9da71ea5f21d02f3b2fbf119bd2d794f0)
Signed-off-by: Mariano Lopez <mariano.lopez@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
While switching from master to krogoth build with a common download directory,
got a large number of warnings like the one listed below:
WARNING: freetype-2.6.3-r0 do_fetch: Couldn't load checksums from
donestamp /home/maxin/downloads/freetype-2.6.3.tar.bz2.done: ValueError
(msg: unsupported pickle protocol: 4)
These warnings are caused by the difference in pickle module
implementation between python3 (master) and python2 (krogoth). Python2 supports
3 different protocols (0, 1, 2) and its pickle.HIGHEST_PROTOCOL is 2, whereas
Python3 supports 5 different protocols (0, 1, 2, 3, 4) and its
pickle.HIGHEST_PROTOCOL is 4.
My suggestion is to use 2 since it is backward compatible with python2
(all the supported distros for krogoth provide python2, which supports
pickle protocol version 2).
(Bitbake rev: cc67800f279fb211ee3bb4ea7009fdbb82973b02)
Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
When using a PREMIRROR with plain (non-unpack) files, a SRC_URI like
SRC_URI = "file://devmem2.c"
will cause devmem2.c to be a symlink in the WORKDIR pointing to the
local PREMIRROR.
Trying to apply a patch on this file will either modify the file on
the PREMIRROR or will fail due to sanity checks:
ERROR: devmem2-1.0-r7 do_patch: Command Error: 'quilt --quiltrc /cache/build-ubuntu/sysroots/x86_64-oe-linux/etc/quiltrc push' exited with 1 Output:
Applying patch devmem2-fixups-2.patch
File devmem2.c is not a regular file -- refusing to patch
(Bitbake rev: cfd481fe9799e7a4c6bfac32e56cc91cfcd81088)
Signed-off-by: Enrico Scholz <enrico.scholz@sigma-chemnitz.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Implement progress reporting support specifically for the fetchers. For
fetch tasks we don't necessarily know which fetcher will be used (we
might initially be fetching a git:// URI, but if we instead download a
mirror tarball we may fetch that over http using wget). These programs
also have different abilities as far as reporting progress goes (e.g.
wget gives us percentage complete and rate, git gives this some of the
time depending on what stage it's at). Additionally we filter out the
progress output before it makes it to the logs, in order to prevent the
logs filling up with junk.
At the moment this is only implemented for the wget and git fetchers
since they are the most commonly used (and svn doesn't seem to support
any kind of progress output, at least not without doing a relatively
expensive remote file listing first).
Line changes such as the ones you get in git's output as it progresses
don't make it to the log files; you only get the final state of the line,
so the logs aren't filled with progress information that's useless after
the fact.
Part of the implementation for [YOCTO #5383].
(Bitbake rev: 4027649f422ee64b1c4e1ad8d48ac295050afbff)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Some services such as SourceForge seem to struggle to keep up under load, with
the result that over half of the autobuilder checkuri runs fail with
sourceforge.net "connection timed out".
Attempt to mitigate this by re-attempting the network operation once on failure.
(Bitbake rev: 54b1961551511948e0cbd2ac39f19b39b9cee568)
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
In recipes which use the perforce fetcher, enable use of SRCREV to
specify any of: ${AUTOREV}, changelist number, p4date, or label. This
is more in-line with how the other fetchers work for source control
systems.
Allow p4 to use the P4CONFIG env variable to define the server URL,
username, and password if not provided in a recipe.
This does change existing perforce fetcher usage by recipes and will
likely require recipes which use the perforce fetcher to be updated.
No recipes in oe-core use the perforce fetcher.
References [YOCTO #6303]
(Bitbake rev: 6298696bb94a127cdec7964315f6891ba92cd026)
Signed-off-by: Andrew Bradford <andrew.bradford@kodakalaris.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>